Rationality Quotes April 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself
Do not quote comments/posts on LW/OB
No more than 5 quotes per person per monthly thread, please.
Paul Dirac
Excellent quote.
Dick Teresi, The Undead
You’re going to die.
Or maybe not.
I like the first video, but I wish it ended at 4:20. It reminds me a lot of Ecclesiastes, which is a refreshingly honest essay about the meaning of life, with the moral “and therefore you should do what God wants you to do” tacked on at the end by an anonymous editor.
On counter-signaling, how not to do it:
-- The Irish Independent, “News In Brief”
Maybe the guy had been reading too much Edgar Allan Poe? As a child, I loved “The Purloined Letter” and tried to play that trick on my sister—taking something from her and hiding it “in plain sight”. Of course, she found it immediately.
ETA: it was a girl, not a guy.
I find it highly unlikely that this is the whole story. Surely the police are not licensed to investigate a car based solely on its vanity plate and where it was parked...
You are probably right that more information drew police attention to the car, but “near the border” gets one most of the way to legally justified. In the 1970s, the US Supreme Court explicitly approved a permanent checkpoint approximately 50 miles north of the Mexican border.
Well that’s a rather depressing piece of law...
Chris Bucholz
Mostly agreed. If I were to stand on a soapbox and say “light with a wavelength of 523.4371 nm is visible to the human eye”, it would fall into the category of an unsubstantiated claim by a single person. But it is implied by the general knowledge that the human visual range is from roughly 400 nm to roughly 700 nm, and that has been confirmed by anyone who has looked at a spectrum with even crude wavelength calibration.
Shouldn’t that say that it is the same?
-Carl Rogers, On Becoming a Person: A Therapist’s View of Psychotherapy (1961)
Facts are friendly on average, that is. Individual pieces of evidence might lead you to update towards a wrong conclusion. /nitpick
Even then we could potentially nitpick even further, depending on what is meant by ‘average’.
Excellent point.
A while ago I saw a good post or quote on LW on the problem of confusing a phrase one uses to encapsulate an insight with the insight itself. Unfortunately I don’t remember where.
Knowing about evolution is pretty cool, but I’d be a lot more satisfied if I could believe that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind (and that my nation—and, come to that, tribe—was even more pinnacle than everyone else).
...and if it turned out that believing that particular falsehood didn’t have consequences that left you less satisfied.
Okay, hypothetical: Dying human. They believed in God their entire life and have lived as basically decent according to their own ethics, and therefore think they’re going to be blissing out for the rest of infinity. They will believe this for the next couple of minutes, and then stop existing.
Would you, given the opportunity, dispel their illusion?
Depends on what I expected the result of doing so to be.
If I expected the result to be that they are more unhappy than they otherwise would be for the rest of their lives with no other compensating benefit (which is certainly the conclusion your hypothetical encourages), then no I wouldn’t.
If I expected the result to be either that they are happier than they otherwise would be for the rest of their lives, or that there is some other compensating benefit to them knowing what will actually happen, then yes I would.
Why do you ask?
Because this is (to my mind) an example of a situation where the facts aren’t friendly and the truth is harmful—thus (hopefully) justifying my objection to the original quote.
OK. Thanks for clarifying.
Dispel all their illusions, including the one that assigned negative utility to unavoidable dying. There are better things to do with 2 minutes than expecting fun you won’t receive.
If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.
Sorry for the slow reply.
Hmm. I may be a bit biased because I don’t really have a high valuation on being alive as such (which is to say utility[X] is nearly the same as utility[X and Julian is alive] for me, all other things being equal—it’s why I am not signed up for cryonics).
However I think that any utility calculus that negatively values the fun you’re not going to have when inevitably dead is as silly as negatively valuing the fun you didn’t get to have because said events preceded your birth, and you inevitably can’t extend your life into the past. You get more chance to fulfil your values in the real world by making use of your 2 minutes than by anticipating values that are not going to happen. And I do very much place utility on my values being fulfilled in a real, rather than self-deceptive, way.
Yes, the whole statement has an implicit “In the real world” premise.
I’d be happy if I had a magic wand that could violate the second law of thermodynamics, but in the real world . . .
I wasn’t clear. Believing that would make me happy even if it wasn’t true. There’s no reason to assume reality would be nice enough to only hand us facts that we find satisfying.
If you happen to have a brain that finds the process of learning more satisfying than any possible falsehood, then that’s great… But I don’t think many people have that advantage.
There’s a substantial minority in the community that dislikes the Litany of Gendlin, so you have plenty of company here.
But even granting the premise that believing true things conflicts with being happy, believing true things has been useful for achieving every other type of goal. So it seems like you are endorsing trading off achievement of other goals in order to maximize happiness. Without challenging your decision to adopt particular terminal values, I am unsure if your chosen tradeoff is sustainable.
I’m not endorsing that, for exactly the reason you said: knowing stuff, on average, will let you achieve your goals. The original quote, though, stated that the truth is “never unsatisfying”, which seemed to me to be a false statement.
You sound pretty confident that, if you believed that we were created as the pinnacle of design by a super-awesome Thing that had a specific plan in mind, and that your nation/tribe was even more pinnacle than everyone else, you would be happier than you are now.
Can you clarify your reasons for believing that? I mean, I grew up with a lot of people who believe that, and as a class they didn’t seem noticeably happier than the people who didn’t, so I’m inclined to doubt it. But I’m convinceable.
You got me, since during the time I did believe that I was a lot less happy than I am now, because that falsehood was part of a whole set of falsehoods which led to annoying obligations. But I do distinctly remember being satisfied with knowing the ultimate goal of the universe and my place in it, and how realising the truth made me feel unsatisfied.
The statement “the truth is never an unsatisfying thing” seems to be affect-heuristic reasoning: going from “truth is useful” to “truth is good” to “truth always feels good to know”.
Sure. To the extent that you’re simply arguing that the initial quote overreaches, I’m not disagreeing with you. But you seemed to be making more positive claims about the value of ignorance.
Richard Hamming
It surprises people like Greg Egan, and they’re not entirely stupid, because brains are Turing complete modulo the finite memory—there’s no analogue of that for visible wavelengths.
If this weren’t Less Wrong, I’d just slink away now and pretend I never saw this, but:
I don’t understand this comment, but it sounds important. Where can I go and what can I read that will cause me to understand statements like this in the future?
When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can’t.
With computation it is known that sufficiently strong programming languages are in some sense equal. For example, you could speak about relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them—in the worst case, it would be implemented as a simulation of another language running its native implementation.
There are some technical details, though. Simulating another program is slower and requires more memory than running the original program. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all available time, so it does not necessarily follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but it cannot do an infinite number of steps. The memory is also unlimited, but in a finite time you can only manage to use a finite amount of memory.)
So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others can not. (Then, there are other interesting levels of abstraction which care about time and space complexity of algorithms.)
Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from the programming language (e.g. in Pascal you forbid “while” and “repeat” commands, and the ability to call functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages seem extremely simple—a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is write a Turing machine simulator in it, which is usually very simple.
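To make that concrete, here’s a minimal sketch of a Turing machine simulator in Python (the bit-flipping machine and its rule table are made up for illustration; the point is how little machinery is needed for universality):

```python
# A minimal Turing machine simulator. The bit-flipping machine below is made
# up for illustration; the point is how little is needed for universality.

def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Walk right, inverting each bit; halt on the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "10110"))  # prints 01001
```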
Now back to the original discussion… Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen and paper) we can simulate a computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that ability to compute isn’t the same as ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
Wow. That’s really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)
Could you also explain why the HPMoR universe isn’t Turing computable? The time-travel involved seems simple enough to me.
Not a complete answer, but here’s commentary from a ffdn review of Chapter 14:
I got the impression that what “not Turing-computable” meant is that there’s no way to only compute what ‘actually happens’; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the ‘false’ timelines.
Sounds rather like our own universe, really.
There’s also the problem of an infinite number of possible solutions.
The number of solutions is finite but (very, very, mind-bogglingly) large.
Ah. It’s math.
:) Thanks.
A computational system is Turing complete if certain features of its operation can reproduce those of a Turing machine, which is a sort of bare-bones abstracted model of the low-level process of computation. This is important because you can, in principle, simulate the active parts of any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you’ve got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there’s a deep symmetry between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.
Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it’s related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman’s Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the “Best Textbooks on Every Subject” thread to see if there’s a consensus on another.
Curious, does “memory space” mean something more than just “memory”?
Just a little more specific. Some people may hear “memory” and associate it with, say, the duration of their memory rather than how much can be physically held. For example when a human is said to have a ‘really good memory’ we don’t tend to be trying to make a claim about the theoretical maximum amount of stuff they could remember.
No, although either or both might be a little misleading depending on what connotations you attach to it: an idealized Turing machine stores all its state on a rewritable tape (or several tapes, but that’s equivalent to the one-tape version) of symbols that’s infinite in both directions. You could think of that as analogous to both memory and disk, or to whatever the system you’re actually working with uses for storage.
Right, I know that. Was just curious why the extra verbiage in a post meant to explain something.
Because it’s late and I’m long-winded. I’ll delete it.
https://en.wikipedia.org/wiki/Turing_completeness
What does that statement mean in the context of thoughts?
That is, when I think about human thoughts I think about information processing algorithms, which typically rely on hardware set up for that explicit purpose. So even though I might be able to repurpose my “verbal manipulation” module to do formal logic, that doesn’t mean I have a formal logic module.
Any defects in my ability to repurpose might be specific to me: I might be able to think the thought “A-> B, ~A, therefore ~B” with the flavor of trueness, and another person can only think that thought with the flavor of falseness. If the truth flavor is as much a part of the thought as the textual content, then the second thinker cannot think the thought that the first thinker can.
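(For what it’s worth, the quoted schema is in fact invalid: it’s the fallacy of denying the antecedent, as a brute-force truth table confirms. A quick sketch:)

```python
# Brute-force truth table for "A -> B, ~A, therefore ~B": the inference is
# invalid iff some assignment makes both premises true and the conclusion false.
from itertools import product

for A, B in product([True, False], repeat=2):
    premises = (not A or B) and (not A)  # A -> B holds, and ~A holds
    conclusion = not B                   # ~B
    if premises and not conclusion:
        print(f"counterexample: A={A}, B={B}")  # prints A=False, B=True
```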
Aren’t there people who can hear sounds but not music? Are their brains not Turing complete? Are musical thoughts ones they cannot think?
It means nothing, although Greg Egan is quite impressed by it. Sad but true: Someone with an IQ of, say, 90 can be trained to operate a Turing machine, but will in all probability never understand matrix calculus. The belief that Turing-complete = understanding-complete is false. It just isn’t stupid.
It doesn’t mean nothing; it means that people (like machines) can be taught to do things without understanding them.
(They can also be taught to understand, provided you reduce understanding to Turing-machine computations, which is harder. “Understanding that 1+1 = 2” is not the same thing as being able to output “2” to the query “1+1=”.)
I would imagine that he can be taught matrix calculus, given sufficient desire (on his and the teachers’ parts), teaching skill, and time. I’m not sure if in practice it is possible to muster enough desire or time to do it, but I do think that understanding is something that can theoretically be taught to anyone who can perform the mechanical calculations.
Have you ever tried to teach math to anyone who is not good at math? In my youth I once tutored a woman who was poor, but motivated enough to pay $40/session. A major obstacle was teaching her how to calculate (a^b)^c and getting her to reliably notice that minus times minus equals plus. Despite my attempts at creative physical demonstrations of the notion of a balanced scale, I couldn’t get her to really understand the notion of doing the same things to both sides of a mathematical equation. I don’t think she would ever understand what was going on in matrix calculus, period, barring “teaching methods” that involve neural reprogramming or gain of additional hardware.
Your claim is too large for the evidence you present in support of it.
Teaching someone math who is not good at math is hard, but “will in all probability never understand matrix calculus”!? I don’t think you’re using the Try Harder.
Assume teaching is hard (list of weak evidence: it’s a three-year undergraduate degree; humanity has hardly allowed itself to run any proper experiments in the field, and those that have been run seem to be generally ignored by professional practitioners; it’s massively subject to the typical mind fallacy and most practitioners don’t know that fallacy exists). That you, “in your youth” (without having studied teaching), “once” tutored a woman whom you couldn’t teach very well… doesn’t support any very strong conclusion.
It seems very likely to me that Omega could teach matrix calculus to someone with IQ 90 given reasonable time and motivation from the student. One of the things I’m willing to devote significant resources to in the coming years is making education into a proper science. Given the tools of that proper science I humbly submit that you could teach your former student a lot. Track the progress of the Khan Academy for some promising developments in the field.
What are the experiments that are generally ignored?
Some of it is weak evidence for the hardness claim (3 years degree), some against (all the rest). Does that match what you meant?
I’d intended a different meaning of “hard”. On reflection your interpretation seems a very reasonable inference from what I wrote.
What I meant: Teaching is hard enough that you shouldn’t expect to find it easy without having spent any time studying it. Even as a well educated westerner, the bits of teaching you can reasonably expect to pick up won’t take you far down the path to mastery.
(Thank you for you comment—it got me thinking.)
No, I haven’t, and reading your explanation I now believe that there is a fair chance you are correct. However, one problem I have with it is that you’re describing a few points of frustration, some of which I assume you ended up overcoming. I am not entirely convinced that had she spent, say, one hundred hours studying each skill that someone with adequate talent could fully understand in one, she would not eventually fully understand it.
In cases of extreme trouble, I can imagine her spending forty hours working through a thousand examples, until mechanically she can recognise every example reasonably well, and find the solution correctly, then another twenty working through applications, then another forty hours analysing applications in the real world until the process of seeing the application, formulating the correct problem, and solving it becomes internalised. Certainly, just because I can imagine it doesn’t make it true, but I’m not sure on what grounds I should prefer the “impossibility” hypothesis to the “very very slow learning” hypothesis.
I can’t imagine how hard it would be to learn math without the concept of referential transparency.
Not all that hard if that’s the only sticking point. I acquired it quite late myself.
What was your impression of her intelligence otherwise?
Suzette Haden Elgin (a science fiction author and linguist who was quite intelligent with and about words) described herself as intractably bad at math.
This anecdote gives very little information on its own. Can you describe your experience teaching math to other people—the audience, the investment, the methods, the outcome? Do you have any idea whether that one woman eventually succeeded in learning some of what you couldn’t teach her, and if so, how?
(ETA: I do agree with the general argument about people who are not good at math. I’m only saying this particular story doesn’t tell us much about that particular woman, because we don’t know how good you are at teaching, etc.)
I fear you’re committing the typical mind fallacy. The dyscalculic could simulate a Turing machine, but all of mathematics, including basic arithmetic, is whaargarbl to them. They’re often highly intelligent (though of course the diagnosis is “intelligent elsewhere, unintelligent at maths”), good at words and social things, but literally unable to calculate 17+17 more accurately than “somewhere in the twenties or thirties” or “I have no idea” without machine assistance. I didn’t believe it either until I saw it.
Do you find this harder to believe than, say, aphasia? I’ve never seen it, but I have no difficulty believing it.
Well, I certainly don’t disbelieve in it now. I first saw it at eighteen, in first-year psychology, in the bit where they tried to beat basic statistics into our heads.
I can’t imagine how hard it is to learn to program if you don’t instinctively know how. Yet I know it is that hard for many people. Some succeed in learning, some don’t. Those who do still have big differences in ability, and ability at a young age seems to be a pretty good predictor of lifetime ability.
I realize I must have learned the basics at some point, although I don’t remember it. And I remember learning many more advanced concepts during the many years since. But for both the basics and the advanced subjects, I never experienced anything I can compare to what I’d call “learning” in other subjects I studied.
When programming, if I see/read something new, I may need some time (seconds or hours) to understand it, then once I do, I can use it. It is cognitively very similar to seeing a new room for the first time. It’s novel, but I understand it intuitively and in most cases quickly.
When I studied e.g. biology or math at university, I had to deliberately memorize, to solve exercises before understanding the “real thing”, to accept that some things I could describe I couldn’t duplicate by building them from scratch no matter how much time I had and what materials and tools. This never happened to me in programming. I may not fully understand the domain problem that the program is manipulating. But I always understand the program itself.
And yet I’ve seen people struggle to understand the most elementary concepts of programming, like, say, distinguishing between names and values. I’ve had to work with some pretty poor programmers, and had the official job of on-the-job mentoring newbies on two occasions. I know it can be very difficult to teach effectively, it can be very difficult to learn.
Given that I encountered a heavily preselected set of people, who were trying to make programming their main profession, it’s easy for me to believe that—at the extreme—for many people elementary programming is impossible to learn, period. And the same should apply to math and any other “abstract” subject for which biologically normal people don’t have dedicated thinking modules in their brains.
I’m not sure what you mean by understanding-complete, but remember that the Turing-complete system is both the operator and any machinery they are manipulating.
So you are considering a man in a Chinese room to lack understanding?
Obviously the man in the Chinese room lacks understanding, by most common definitions of understanding. It is the room as a system which understands Chinese. (Assuming lookup tables can understand. By functional definitions, they should be able to.)
But with a person it becomes a bit more complicated because it depends on what we are referring to when we say their name. I was trying to make an allusion to Blindsight.
It means you could, in theory, run an AI on them (slowly).
FWIW I’ve read a study that says about 50% of people can’t tell the difference between a major and a minor chord even when you label them happy/sad. [ETA: Happy/sad isn’t the relevant dimension, see the replies to this comment.] I have no idea how probable that is, but if true it would imply that half of the American population basically can’t hear music.
http://languagelog.ldc.upenn.edu/nll/?p=2074
It shocked the hell out of me, too.
This is weird. It is hard for me to hear the difference in the cadence, but crystal clear otherwise. In the cadence, the problem for me is that the notes drag on, like when you press the pedal on a piano a bit, which makes it hard to discern the difference.
Maybe they lost something in retelling here? Made up new stimuli for which it doesn’t work because of harmonics or something?
Or maybe it’s just me and everyone on this thread? I have a lot of trouble hearing speech through noise (like that of flowing water); I always have to tell others, “I can’t hear what you’re saying, I’m washing the dishes.” Though I’ve no idea how well other people can hear something when they are washing the dishes; maybe I care too much not to pretend to listen when I don’t hear.
This needs proper study.
The following recordings are played on an acoustic instrument by a human (me), and they have spaces in between the chords. The chord sequences are randomly generated (which means that the major-to-minor ratio is not necessarily 1:1, but all of them do have a mixture of major and minor chords).
Each of the following two recordings is a sequence of eight C major or C minor chords:
major-minor-1.mp3
major-minor-2.mp3
Each of the following two recordings is a sequence of eight “cadences” -- groups of four chords that are either
F B♭ C F
or
F B♭ C minor F
cadences-1.mp3
cadences-2.mp3
Edit: Here’s a listing of the chords in all four sound files.
Edit 2 (2012-Apr-22): I added another recording that contains these chords:
repeated over and over, while the balance between the voices is varied, from “all voices roughly equal” to “only the second voice from the top audible”. The second voice from the top is the only one that is different on the C minor chord. My idea is that hearing the changing voice foregrounded from its context like this might make it easier to pick it out when it’s not foregrounded.
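(For anyone who wants to generate additional test stimuli programmatically, here is a minimal sketch in Python that synthesizes pure-sine C major and C minor triads, unlike the sustained acoustic recordings above; the frequencies assume equal temperament with A4 = 440 Hz, and the output file names are arbitrary:)

```python
# Synthesize pure-sine C major and C minor triads as mono 16-bit WAV files.
# Frequencies assume equal temperament, A4 = 440 Hz; file names are arbitrary.
import math
import struct
import wave

RATE = 44100
NOTES = {"C4": 261.63, "Eb4": 311.13, "E4": 329.63, "G4": 392.00}

def chord_samples(freqs, seconds=1.5, amp=0.8):
    out = []
    for i in range(int(RATE * seconds)):
        t = i / RATE
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        out.append(int(s * amp * 32767))
    return out

def write_wav(name, samples):
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

write_wav("c_major.wav", chord_samples([NOTES["C4"], NOTES["E4"], NOTES["G4"]]))
write_wav("c_minor.wav", chord_samples([NOTES["C4"], NOTES["Eb4"], NOTES["G4"]]))
```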
Ditto for me—The difference between the two chords is crystal clear, but in the cadence I can barely hear it.
I’m not a professional, but I sang in school chorus for 6 years, was one of the more skilled singers there, I’ve studied a little musical theory, and I apparently have a lot of natural talent. And the first time I heard the version played in cadence I didn’t notice the difference at all. Freaky. I know how that post-doc felt when she couldn’t hear the difference in the chords.
I added another recording. See “Edit 2” in this comment for an explanation.
Nope, the audio examples are all straightforward realizations of the corresponding music notation. (They are easy for me to tell apart.)
Still, the notes drag on, the notes have harmonics, etc. It is not pure sine waves that abruptly stop and give time for the ear to ‘clear’ of afterimage-like sound.
I hear the difference in the cadence, it’s just that I totally can’t believe it can possibly be clearer than just the one chord then another chord. I can tell apart just the two chords at much lower volume level and/or paying much less attention.
I am with you on easily telling the two apart in the original chords but being unable to reliably tell the difference in the cadence version.
I’ve had between a dozen and two dozen music students over the years. (Guitar and bass guitar.) Some of them started out having trouble telling the difference between ascending and descending intervals. (In other words, some of them had bad ears.) All of them improved, and all of them, with practice, were able to hear me play something and play it back by ear. I’m sure there are some people who are neurologically unable to do this, but in general, it is a learnable skill.
The cognitive fun! website has a musical interval exercise.
Edit: One disadvantage to that exercise/game for people who aren’t already familiar with the intervals is that it doesn’t have you differentiate between major and minor intervals. (So if you select e.g. 2 and 8 as your intervals, you’ll be hearing three different intervals, because some of the 2nds will be minor rather than major.) Sooner or later I’ll write my own interval game!
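(In the meantime, here’s a minimal sketch of the non-audio half of such a game: a text drill on interval names and semitone counts that does force the major/minor distinction. The interval table is standard; everything else is arbitrary.)

```python
# A text-only drill on interval names vs. semitone counts, which does force
# the major/minor distinction. Interval table is standard; the rest is arbitrary.
import random

INTERVALS = {
    1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd", 4: "major 3rd",
    5: "perfect 4th", 7: "perfect 5th", 8: "minor 6th", 9: "major 6th",
    10: "minor 7th", 11: "major 7th", 12: "octave",
}

def drill(rounds=5):
    score = 0
    for _ in range(rounds):
        semitones, name = random.choice(list(INTERVALS.items()))
        answer = input(f"How many semitones in a {name}? ").strip()
        if answer == str(semitones):
            score += 1
        else:
            print(f"  No: a {name} spans {semitones} semitones.")
    print(f"{score}/{rounds} correct")

if __name__ == "__main__":
    drill()
```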
is this what you’re looking for?
http://www.musictheory.net/exercises/ear-interval
That’s pretty cool. Are there keybindings?
I don’t know, doesn’t look like it.
Likewise.
I was going to comment about how the individual chords were clearly different to my ear but the “stereotypical I-IV-V-I cadential sequences” were indistinguishable, precisely the reverse of the experience the Bell Labs post doc reportedly reported. Then I read the comments on the article and realized this is fairly common, so I deleted the comment. Then I decided to comment on it anyway. Now I have.
I had to listen to that second part several times before I could pick up the difference too. They sound equivalent unless I concentrate.
And me. I guess—as the most probable explanation—they just lost something crucial in retelling. The notes drag on a fair bit in the second part. I can hear the difference if I really concentrate. But it’s like a typo in the text. If the text was blurred.
The second sequence sounded jarringly wrong to me, FWIW.
At first, I found it unbelievable. Then, I remembered that I have imperfect perfect pitch: I learned both piano and french horn; the latter of which is transposed up a perfect fourth. Especially when I’m practicing regularly, I can usually name a note or simple chord when I hear it; but I’m often off by a perfect fourth.
Introspecting on the difference between being right about a note and wrong about a note makes me believe people can confuse major and minor, but still enjoy music.
Might have something to do with the fact that happy/sad is neither an accurate nor an encompassing description of the uses of major/minor chords, unless you place a C major and a C or A minor directly next to each other. I for one find that when I try to tell the difference solely on that basis, I might as well flip a coin and my success rate would go down only slightly. When I come at it from other directions and ignore the emotive impact, my success rate is much higher.
In short: Your conclusion doesn’t follow from the evidence.
I stated the evidence incorrectly, look at the uncle/aunt of your comment (if you haven’t already) for the actual evidence.
Yeah, I spotted that after making my comment, but after that I wasn’t sure whether you were citing the same source material or no. The actual evidence does say a lot more about how humans (don’t?) perceive musical sounds. Thanks for clarifying, though.
I’m curious; 50% of what sample? total human population or USians or what?
There’s the halting problem, so here you go. There’s also the thoughts that you’ll never arrive at because your arriver at the thoughts won’t reach them, even if you could think them if told of them.
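(For concreteness, the standard diagonal argument behind the halting problem, sketched in Python; `halts` here is purely hypothetical, since no such function can actually exist, which is the point:)

```python
# The standard diagonal argument. halts() is purely hypothetical: assume a
# perfect oracle existed that predicts whether calling f() would ever halt.
def make_g(halts):
    def g():
        if halts(g):     # if the oracle says g halts...
            while True:  # ...then g loops forever;
                pass
        # ...otherwise g halts immediately.
    return g
# Either answer halts(g) could give is wrong, so no such oracle can exist.
```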
In Pinker’s book “How the Mind Works” he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don’t map cleanly to any of those subsystems.
Because thoughts don’t behave much like perceptions at all, so that wouldn’t occur to us or convince us much once we hear it. Are there any thoughtlike things we don’t get but can indirectly manipulate?
Extremely large numbers.
(among other things)
Parity transforms as rotations in four-dimensional space.
Can you expand on what you mean by that? There are many ways in which thoughts behave quite a bit like perceptions, which is unsurprising since they are both examples of operations clusters of neurons can perform, which is a relatively narrow class of operations. Video games behave quite a bit like spreadsheets in a similar way.
Of course, there are also many ways in which video games behave nothing at all like spreadsheets, and thoughts behave nothing like perceptions.
Naively speaking, if Alice can think a thought, she can just tell Bob, and he will. Dogs can’t tell us what ultrasounds sound like, but that’s for the same reason they can’t tell us what regular sounds sound like.
That’s assuming the thought can be expressed in language.
Even if we posit that for every pair of humans X,Y if X thinks thought T then Y is capable of thinking T, it doesn’t follow that for all possible Ts, X and Y are capable of thinking T.
That is, whether Alice can think the thought in the first place is not clear.
If you limit yourself to humans, yes. But at least one mind has to be able to think a thought for that thought to exist.
Ah, I thought you were limiting yourself to humans, given your example.
If you’re asserting that for every pair of cognitive systems X,Y (including animals, aliens, sufficiently sophisticated software, etc.) if X thinks thought T then Y is capable of thinking T, then we just disagree.
Yes, transmission of thoughts between sufficiently different minds breaks down, so we recover the possibility of thoughts that can be thought but not by us. But that’s a sufficiently different reason from why there are sensations we can’t perceive to show that the analogy is very shallow.
It would surprise me, since no one could ever give me an example. I’m not sure what kind of evidence could give me good reason to think that there are thoughts that I cannot think.
Try visualizing four spatial dimensions.
Just visualize n dimensions, and then set n = 4.
You might as well tell me to ‘just’ grow wings and fly away...
I believe wnoise was making a joke—one that I thought was moderately funny.
I thought it might be, and if I’d read it elsewhere, I’d have been sure of it—but this is LessWrong, which is chock-full of hyperintelligent people whose abilities to do math, reason and visualize are close to superpowers from where I am. You people seriously intimidate me, you know. (Just because I feel you’re so much out of my league, not for any other reason.)
It’s a standard joke about mathematicians vs everybody else, and I intended it as such. I can do limited visualization in the 4th dimension (hypercubes and 5-cells (hypertetrahedra), not something as complicated as the 120-cell or even the 24-cell), but it’s by extending from a 3-d visualization with math knowledge, rather than specializing n to 4.
For what it’s worth, my ability to reason is fairly good in a very specific way—sometimes I see the relevant thing quickly (and after LWers have been chewing on a problem and haven’t seen it (sorry, no examples handy, I just remember the process)), but I’m not good at long chains of reasoning. Math and visualizing aren’t my strong points.
Been there, done that. Advice to budding spatial-dimension visualizers: the fourth is the hardest, once you manage the fourth the next few are quite easy.
Is this legit and if so can you elaborate? I bet I’m not the only one here who has tried and failed.
Well, I can elaborate, but I’m not sure how helpful it will be. “No one can be told what the Matrix is” and that sort of thing. The basic idea is that it’s the equivalent of the line rising out of the paper in two dimensions, but in three dimensions instead. But that’s not telling someone who has tried and failed anything they don’t know, I’m sure.
If you really want to be able to visualize higher-order spaces, my advice would be to work with them, do math and computer programming in higher-order spaces, and use that to build up physical intuitions of how things work in higher-order spaces. Once you have the physical intuitions it’s easier for your brain to map them to something meaningful. Of course if your reason for wanting to be able to visualize 4D-space is because you want to use the visualization to give you physical intuitions about it that will be useful in math or computer programming, this is an ass-backward way of approaching the problem.
Is it like having a complete n-dimensional construct in your head that you can view in its entirety?
I can visualise 4-dimensional polyhedra, in much the same way I can draw non-planar graphs on a sheet of paper, but it’s not what I imagine being able to visualise higher-dimensional objects to be like.
I used to be into Rubik’s Cube, and it’s quite easy for me to visualise all six faces of a 3D cube at once, but when visualising, say, a 4-octahedron, the graph is easy to visualise, (or draw on a piece of paper, for that matter), but I can only “see” one perspective of the convex hull at a time, with the rest of it abstracted away.
Even better—play Snake in four spatial dimensions!
When I was 13 or so, my brains worked significantly better than they currently do, and I figured out an easy trick for that in a math class one day. Just assign a greyscale color value (from black to white) to each point! This is exactly like taking an usual map and coloring the hills a lighter shade and the low places a darker one.
The only problem with that is it’s still “3.5D”, like the “2.5D” graphics engine of Doom, where there’s only one Z-value to any point in the world so things can’t be exactly above or below each other.
To overcome this, you could theoretically imagine the 3D structure alternating between “levels” in the 4th dimension every second, so e.g. one second a 3D cube’s left half is grey and its right half is white, indicating a surface “rising” in the 4th dimension, but every other second the right half changes to black while the left is still grey, showing a second surface which begins at the same place and “descends” in the 4th dimension. Voila, you have two 3D “surfaces” meeting at a 4D angle!
With RGB color instead of greyscale, one could theoretically visualize 6 dimensions in such a way.
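(A minimal sketch of the greyscale trick, assuming numpy and matplotlib: sample points on the unit 3-sphere in four dimensions and let a gray colormap carry the fourth coordinate:)

```python
# The greyscale trick: sample points on the unit 3-sphere in 4D and let a
# gray colormap carry the 4th coordinate w. Assumes numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

pts = np.random.normal(size=(2000, 4))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto the 3-sphere

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c=pts[:, 3], cmap="gray")
fig.colorbar(sc, ax=ax, label="w (4th coordinate)")
plt.show()
```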
Now, if only this let you rotate things through the 4th dimension.
Doing specific rotations by breaking them into steps is possible. Rotations by 90 degrees through the higher dimensions are doable with some effort—it’s just coordinate swapping after all. You can make checks that you got it right. Once you have this mastered, you can compose it with rotations that don’t touch the higher dimensions. Then compose again with one of these 90 degree rotations, and you have an effective rotation through the higher dimensions.
(Understanding the commutation relations for rotation helps in this breakdown, of course. If you can then go on to understanding how the infinitesimal rotations work, you’ve got the whole thing down.)
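(A sketch of that composition with numpy: build rotations in single coordinate planes, then conjugate an ordinary x-y rotation by a 90-degree x-w swap to get a rotation through the fourth dimension, composed entirely from easy-to-check pieces:)

```python
# Build rotations in single coordinate planes, then conjugate an ordinary
# x-y rotation by a 90-degree x-w swap: the result rotates through the
# 4th dimension, composed entirely from easy-to-check pieces.
import numpy as np

def plane_rotation(i, j, theta, n=4):
    """Rotation by theta in the coordinate plane spanned by axes i and j."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

swap_xw = plane_rotation(0, 3, np.pi / 2)  # 90 degrees: x -> w, w -> -x
spin_xy = plane_rotation(0, 1, np.pi / 6)  # a familiar "3D-style" rotation
R = swap_xw @ spin_xy @ swap_xw.T          # now a pi/6 rotation in the y-w plane
print(np.round(R, 3))
```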
I knew a guy who credibly claimed to be able to visualize 5 spatial dimensions. He is a genius math professor with ‘autistic savant’ tendencies.
I certainly couldn’t pull it off and I suspect that at my age it is too late for me to be trained without artificial hardware changes.
The way I would do it for dimensions between d=4 and d=6 is to visualize a (d-3)-dimensional array of cubes. Then you remember that similarly positioned points, in the interior of cubes that are neighbors in the array, are near-neighbors in the extra dimensions (which correspond to the directions of the array). It’s not a genuinely six-dimensional visualization, but it’s a three-dimensional visualization onto which you can map six-dimensional properties. Then if you make an effort, you could learn how rotations, etc, map onto transformations of objects in the visualization. I would think that all claimed visualizations of four or more dimensions really amount to some comparable combinatorial scheme, backed up with some nonvisual rules of transformation and interpretation.
ETA: I see similar ideas in this subthread.
Am I allowed to use time/change dimensions? Because if so, the task is trivial (if computationally expensive).
Ok, now add a temporal dimension.
Adding multiple temporal dimensions is effectively how I do it, so one more shouldn’t be a problem*. I visualize a 3 dimensional object in a space with a reference point that can move in n perpendicular directions. As the point of reference moves through the space, the object’s shape and size change.
Example: to visualize a 5-dimensional sphere, I first visualize a 3 dimensional sphere that can move along a 1 dimensional line. As the point of reference reaches the three-dimensional sphere, a point appears, and this point grows into a full-sized sphere at the middle, then shrinks back down to a point. I then add another degree of freedom perpendicular to the first line, and repeat the procedure.
Rotations are still very hard for me to do, and become increasingly difficult with 5 or more dimensions. I think this is due to a very limited amount of short-term memory. As for my technique, I think it piggybacks on the ability to imagine multiple timelines simultaneously. So, alas, it’s a matter of repurposing existing abilities, not constructing entirely new ones.
*up to 7: 3 of space, 3 of observer-space, and 1 of time
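(The sphere example reduces to a one-line formula: the 3D slice of a unit 5-ball at extra coordinates (u, v) is a sphere of radius sqrt(1 - u^2 - v^2). A quick check:)

```python
# Cross-sections of a unit 5-ball: the 3D slice at extra coordinates (u, v)
# is a sphere of radius sqrt(1 - u^2 - v^2), matching the description above.
import math

def slice_radius(u, v, R=1.0):
    r2 = R**2 - u**2 - v**2
    return math.sqrt(r2) if r2 > 0 else 0.0  # empty slice outside the ball

for u in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"u={u:+.1f}, v=0.0 -> radius {slice_radius(u, 0.0):.3f}")
# prints 0.000, 0.866, 1.000, 0.866, 0.000: a point growing to a full
# sphere at the middle, then shrinking back to a point.
```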
Either I can visualize them, and then they’re thoughts I can think, or I can’t visualize them, in which case the exercise doesn’t help me.
If you can, replace 4 with N for sufficiently large N.
If you can’t, imagine a creature that evolved in a 4-dimensional universe. I find it unlikely that it would not be able to visualize 4 dimensions.
There’s a pretty serious gap between the idea of a person evolved to visualize four dimensions and it being capable of thoughts I cannot think. This might be defensible, but if so only in the context of certain thoughts, something like qualitative ones. But the original quote was inferring from the fact that not everyone can see all the colors to the idea that there are thoughts we cannot think. If ‘colors I can’t see’ are the only kinds of things we can defend as thoughts that I cannot think, then the original quote is trivial.
So even if you can defend 4d visualizations as thoughts I cannot think, you’d have to extend your argument to something else.
But I have a question in return: how would the belief that there are thoughts you cannot think modify your anticipations? What would that look like?
By itself? Not much at all. The fun part is encountering another creature which can think those thoughts, then deducing the ability (and, being human, shortly thereafter finding some way to exploit it for personal gain) without being able to replicate the thoughts themselves.
Hinton cubes. I haven’t tried them though.
ETA: Original source, online.
The existence of other signals your brain simply doesn’t process doesn’t shift your prior at all?
That doesn’t seem strictly relevant. Other signals might lead me to believe that there are thoughts I don’t think (but I accepted that already), not thoughts I can’t think. How could I recognize such a thing as a thought? After all, while every thought is a brain signal, not every brain signal is a thought: animals have lots of brain signals, but no thoughts.
What is the difference between a thought you can’t think and one you don’t think?
Well, for example I don’t think very much about soccer. There are thoughts about who the best soccer team is that I simply don’t ever think. But I can think them.
Another case: In two different senses of ‘can’, I can and can’t understand Spanish. I can’t understand it at the moment, but nevertheless Spanish sentences are in principle translatable into sentences I can understand. I also can’t read Aztec hieroglyphs, and here the problem is more serious: no one knows how to read them. But nevertheless, insofar as we assume they are a form of language, we assume that we could translate them given the proper resources. To see something as translatable just is to see it as a language, and to see something as a language is to see it as translatable. Anything which is in principle untranslatable just isn’t recognizable as a language.
I think the point is analogous (and that’s no accident) with thoughts. Any thought that I couldn’t think by any means is something I cannot by any means recognize as a thought in the first place. All this is just a way of saying that the belief that there are thoughts you cannot think is one of those beliefs that could never modify your anticipations. That should be enough to discount it as a serious consideration.
And yet, if I see two nonhuman life forms A1 and A2, both of which are performing something I classify as the same task but doing it differently, and A1 and A2 interact, after which they perform the task the same way, I would likely infer that thoughts had been exchanged between them, but I wouldn’t be confident that the thoughts which had been exchanged were thoughts that could be translated to a form that I could understand.
Alternative explanations include:
They exchanged genetic material, like bacteria, or outright code, like computer programs, which made them behave more similarly.
They are programs, one attacked the other, killed it and replaced its computational slot with a copy of itself.
A1 gave A2 a copy of its black-box decision maker which both now use to determine their behavior in this situation. However, neither of them understands the black box’s decision algorithm on the level of their own conscious thoughts; and the black box itself is not sentient or alive and has no thoughts.
One of them observed the other was more efficient and is now emulating its behavior, but they didn’t talk about it (“exchange thoughts”), just looked at one another.
These are, of course, not exhaustive.
You could call some of these cases a kind of thought. Maybe to self-modifying programs, a blackbox executable algorithm counts as a thought; or maybe to beings who use the same information storage for genes and minds, lateral gene transfer counts as a thought.
But this is really just a matter of defining what the word “thought” may refer to. I can define it to include executable undocumented Turing Machines, which I don’t think humans like us can “think”. Or you could define it as something that, after careful argument, reduces to “whatever humans can think and no more”.
Sure. Leaving aside what we properly attach the label “thought” to, the thing I’m talking about in this context is roughly speaking the executed computations that motivate behavior. In that sense I would accept many of these options as examples of the thing I was talking about, although option 2 in particular is primarily something else and thus somewhat misleading to talk about that way.
I think you’re accepting and then withdrawing a premise here: you’ve identified them as interacting, and you’ve identified their interaction as being about the task at hand, and the ways of doing it, and the relative advantages of these ways. You’ve already done a lot of translation right there. So the set up of your problem assumes not only that you can translate their language, but that you in some part already have. All that’s left, translation wise, is a question of precision.
Sure, to some level of precision, I agree that I can think any thought that any other cognitive system, however alien, can think. There might be a mind so alien that the closest analogue to its thought process while contemplating some event that I can fathom is “Look at that, it’s really interesting in some way,” but I’ll accept that this in some part a translation and “all that’s left” is a question of precision.
But if you mean to suggest by that that what’s left is somehow negligible, I strenuously disagree. Precision matters. If my dog and I are both contemplating a ball, and I am calculating the ratio between its volume and surface, and my dog is wondering whether I’ll throw it, we are on some level thinking the same thought (“Oh, look, a ball, it’s interesting in some way”) but to say that my dog therefore can understand what I’m thinking is so misleading as to be simply false.
I consider it possible for cognitive systems to exist that have the same relationship to my mind in some event that my mind has to my dog’s mind in that example.
Well, I don’t think I even implied that the dog could understand what you’re thinking. I don’t think dogs can think at all. What I’m claiming is that for anything that can think (and thus entertain the idea of thoughts that cannot be thought), there are no thoughts that cannot be thought. The difference between you and your dog isn’t just one of raw processing power. It’s easy to imagine a vastly more powerful processor than a human brain that is nevertheless incapable of thought (I think Yud.’s suggestion for an FAI is such a being, given that he’s explicit that it would not rise to the level of being a mechanical person).
Once we agree that it’s a point about precision, I would just say that this ground can always in principle be covered. Suppose the translation has gotten started, such that there is some set of thoughts at some level of precision that is translatable, call it A, and the terra incognita that remains, call it B. Given that the cognitive system you’re trying to translate can itself translate between A and B (the aliens understand themselves perfectly), there should be nothing barring you from doing so as well.
You might need extremely complex formulations of the material in A to capture anything in B, but this is allowed: we need some complex sentence to capture what the Germans mean by ‘schadenfreude’, but it would be wrong to think that because we don’t have a single term which corresponds exactly, that we cannot translate or understand the term to just the same precision the Germans do.
I accept that you don’t consider dogs to have cognitive systems capable of having thoughts. I disagree. I suspect we don’t disagree on the cognitive capabilities of dogs, but rather on what the label “thought” properly refers to.
Perhaps we would do better to avoid the word “thought” altogether in this discussion in order to sidestep that communications failure. That said, I’m not exactly sure how to do that without getting really clunky, really fast. I’ll give it a shot, though.
I certainly agree with you that if cognitive system B (for example, the mind of a German speaker) has a simple lexical item Lb (for example, the word “schadenfreude”),
...and Lb is related to some cognitive state Slb (for example, the thought /schadenfreude/) such that Slb = M(Lb) (which we ordinarily colloquially express by saying that a word means some specific thought),
...and cognitive system A (for example, the mind of an English speaker) lacks a simple lexical item La such that Slb=M(La) (for example, the state we’d ordinarily express by saying that English doesn’t have a word for “schadenfreude”)...
that we CANNOT conclude from this that A can’t enter Slb, nor that there exists no Sla such that A can enter Sla and the difference between Sla and Slb is < N, where N is the threshold below which we’d be comfortable saying that Sla and Slb are “the same thought” despite incidental differences which may exist.
So far, so good, I think. This is essentially the same claim you made above about the fact that there is no English word analogous to “schadenfreude” not preventing an English speaker from thinking the thought /schadenfreude/.
In those terms, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter Sa. Further, I assert that there can exist a state Sa such that A can enter Sa but B cannot enter any state Sb such that the difference between Sa and Sb is < N.
Do you disagree with that? Or do you simply assert that if so, Sa and Sb aren’t thoughts? Or something else?
I agree that this is an issue of what ‘thoughts’ are, though I’m not sure it’s productive to side step the term, since if there’s an interesting point to be found in the OP, it’s one which involves claims about what a thought is.
I’d like to disagree with that unqualifiedly, but I don’t think I have the grounds to do so, so my disagreement is a qualified one. I would say that there is no state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can recognise Sa as a cognitive state. So without the last ‘and such that’, this would be a metaphysical claim that all cognitive systems are capable of entertaining all thoughts, barring uninteresting accidental interference (such as a lack of memory capacity, a lack of sufficient lifespan, etc.). I think this is true, but alas.
With the qualification that ‘B would not be able to recognise Sa as a cognitive state’, this is a more modest epistemic claim, one which amounts to the claim that recognising something as a cognitive state is nothing other than entering that state to one degree of precision or another. This effectively marks out my opinion on your second assertion: for any Sa and any Sb, such that the difference between Sa and Sb cannot be < N, A (and/or B) cannot by any means recognise the difference as part of that cognitive state.
All this is a way of saying that you could never have reason to think that there are thoughts that you cannot think. Nothing could give you evidence for this, so it’s effectively a metaphysical speculation. Not only is evidence for such thoughts impossible, but evidence for the possibility of such thoughts is impossible.
I’m not exactly sure what it means to recognize something as a cognitive state, but I do assert that there can exist a state Sa such that A can enter Sa, and such that B cannot enter Sa, and such that B can believe that A is entering into a particular cognitive state whenever (and only when) A enters Sa. That ought to be equivalent, yes?
This seems to lead me back to your earlier assertion that if there’s some shared “thought” at a very abstract level I and an alien mind can be said to share, then the remaining “terra incognita” between that and sharing the “thought” at a detailed level is necessarily something I can traverse.
I just don’t see any reason to expect that to be true. I am as bewildered by that claim as if you had said to me that if there’s some shared object that I and an alien can both perceive, then I can necessarily share the alien’s perceptions. My response to that claim would be “No, not necessarily; if the alien’s perceptions depend on sense organs or cognitive structures that I don’t possess, for example, then I may not be able to share those perceptions even if I’m perceiving the same object.” Similarly, my response to your claim is “No, not necessarily; if the alien’s ‘thought’ depends on cognitive structures that I don’t possess, for example, then I may not be able to share that ‘thought’.”
You suggest that because the aliens can understand one another’s thoughts, it follows that I can understand the alien’s thoughts, and I don’t see how that’s true either.
So, I dunno… I’m pretty stumped here. From my perspective you’re simply asserting the impossibility, and I cannot see how you arrive at that assertion.
Well, if the terra incognita has any relationship at all to the thoughts you do understand, such that it could be recognized as a part of or related to a cognitive state, then it is going to consist in stuff which bears inferential relations to what you do understand. These are relations you can necessarily traverse if the alien can traverse them. Add to that the fact that you’ve already assumed that the aliens largely share your world, that their beliefs are largely true, and that they are largely rational, and it becomes hard to see how you could justify the assertion at the top of your last post.
And that assertion has, thus far, gone undefended.
Well, I justify it by virtue of believing that my brain isn’t some kind of abstract general-purpose thought-having or inferential-relationship-traversing device; it is a specific bit of machinery that evolved to perform specific functions in a particular environment, just like my digestive system, and I find it no more plausible that I can necessarily traverse an inferential relationship that an alien mind can traverse than that I can necessarily extract nutrients from a food source that an alien digestive system can digest.
How do you justify your assertion that I can necessarily traverse an inferential relationship if an alien mind is capable of traversing it?
Well, your brain isn’t that, but it’s only a necessary and not a sufficient condition on your having thoughts. Understanding a language is both necessary and sufficient, and a language actually is the device you describe. Your competence with your own language ensures the possibility of your traversal in another.
Sorry, I didn’t follow that at all.
The source of your doubt seemed to be that you didn’t think you possessed a general-purpose thought-having and inferential-relationship-traversing device. A brain is not such a device, we agree. But you do have such a device. A language is a general-purpose thought-having and inferential-relationship-traversing device, and you have that too. So, doubt dispelled?
Ah! OK, your comment now makes sense to me. Thanks.
Agreed that my not believing that my brain is a general-purpose inferential relationship traversing device (hereafter gpirtd) is at the root of my not believing that all thoughts thinkable by any brain are thinkable by mine.
I’m glad we agree that my brain is not a gpirtd.
But you seem to be asserting that English (for example) is a gpirtd.
Can you expand on your reasons for believing that? I can see no justification for that claim, either.
But I do agree that if English were a gpirtd while my brain was not, it would follow that I could infer in English any thought that an alien mind could infer, at the same level of detail that the alien mind could think it, even if my brain was incapable of performing that inference.
So the claim is really that language is a gpirtd, excepting very defective cases (like sign-language or something). That language is an inference relation traversing device is, I think, pretty clear on the surface of things: logic is that in virtue of which we traverse inference relations (if anything is). This isn’t to say that English, or any language, is a system of logic, but only that logic is one of the things language allows us to do.
I think it actually follows from this that language is also a general-purpose thought-having device: thoughts are related, and their content is in large part (or perhaps entirely) constituted, by inferential relations. If we’re foundationalists about knowledge, then we think that the content of thoughts is not entirely constituted by inferential relations, but this isn’t a serious problem.
If we can get anywhere in a process of translation, it is by assuming we share a world with whatever speaker we’re trying to understand. If we don’t assume this, and to whatever extent we don’t assume this, just to that extent we can’t recognize the gap as conceptual or cognitive. If an alien were reacting in part to facts of the shared world, and in part to facts of an unshared world (whatever that means), then just to the extent that the alien is acting on the latter facts, to that extent would we have to conclude that they are behaving irrationally. The reasons are invisible to us, after all. If we manage to infer from their behavior that they are acting on reasons we don’t have immediate access to, then just to the extent that we now view their behavior as rational, we now share that part of the world with them. We can’t decide that behavior is rational while knowing nothing of the action or the content of the reason, in the same sense that we can’t decide whether or not a belief is rational, or true, while knowing nothing of its meaning or the facts it aims at.
This last claim is most persuasively argued, I think, by showing that any example we might construct is going to fall apart. So it’s here that I want to re-ask my question: what would a thought that we cannot think even look like to us? My claim isn’t that there aren’t any such thoughts, only that we could never be given reason for thinking that there are.
ETA: as to the question of brains, here I think there is a sense in which there could be thoughts we cannot think. For example, thoughts which take more than a lifetime to think. But this isn’t an interesting case, and it’s fundamentally remediable. Imagine someone said that there were languages that are impossible for me to understand, and when I pressed him on what he meant, he just pointed out that I do not presently understand Chinese, and that he’s about to kill me. He isn’t making an interesting point, or one anyone would object to. If that is all the original quote intended, then it seems a bit trivial: the quoted person could have just pointed out that 1000 years ago, no one could have had any thoughts about airplanes.
Re: your ETA… agreed that there are thoughts I cannot think in the trivial sense you describe here, where the world is such that the events that would trigger that thought never arise before my death. What is at issue here is not that, but the less trivial claim that there are thoughts I cannot think by virtue of the way my mind works. To repeat my earlier proposed formalization: there can exist a state Sa such that mind A can enter Sa but mind B cannot enter Sa.
But you seem to also want to declare as trivial all cases where the reason B cannot enter Sa is because of some physical limitation of B, and I have more trouble with that.
I mean, sure, if A can enter Sa in response to some input and B cannot, I expect there to be some physical difference between A and B that accounts for this, and therefore some physical modification that can be made to B to remedy this. So sure, I agree that all such cases are “fundamentally remediable”. Worst-case, I transform B into an exact replica of A, and now B can enter state Sa, QED.
I’m enough of a materialist about minds to consider this possible in principle. But I would not agree that, because of this, the difference between A and B is trivial.
Well, at the risk of repeating myself in turn, I’ll go back to my original example. As an observer I would have reason to believe there were some thoughts involved in that exchange, even if I couldn’t think those thoughts.
I understand you to be saying in response that I can necessarily think those thoughts, since I can understand them at some level L1 by virtue of having an awareness of the same world A1 and A2 are interacting with (I agree so far) and that I can therefore understand them at any desired level L2 as long as the aliens themselves can traverse an inference relation between L1 and L2 because I have a language, and languages* are gpirtds (I disagree).
I’ve asked you why you believe English (for example) is a gpirtd, and you seem to have responded that English (like any non-defective language) allows us to do logic, and logic allows us to traverse inference relations. Did I understand that correctly?
If so, I don’t think your response is responsive. I would certainly agree that English (like any language) allows me to perform certain logical operations and therefore to traverse certain inference relations. I would not agree that for all inference relations R, English (or any other language) allows me to traverse R.
I agree that if I’m wrong about that and English (for example) really does allow me to traverse all inference relations, then the rest of your argument holds.
I see no reason to believe that, though.
===
Except, you say, for defective cases like sign-language. I have absolutely no idea on what basis you judge sign language defective and English non-defective here, or whether you’re referring to some specific sign language or the whole class of sign languages. However, I agree with you that sign languages are not gpirtds. (I don’t believe English is either.)
Well, I’d like a little more from you: I’d like an example where you are given reason to think that there are thoughts in the air, and reason to think that they are not thoughts you could think. As it stands, I of course have no objection to your example, because the example doesn’t go so far as suggesting the latter of the two claims.
So do you think you can come up with such an example? If not, don’t you think that counts powerfully against your reasons for thinking that such a situation is possible?
This is not exactly related to my claim. My claim is that you could never be given a reason for thinking that there are thoughts you cannot think. That is not the same as saying that there are thoughts you cannot think. So likewise, I would claim that you could never, deploying the inference relations available to you, infer that there are inference relations unavailable to you. Because if you can infer that they are inference relations, then they are available to you. (ETA: the point here, again, is that you cannot know that something is an inference relation while not knowing of what kind of relation it is. Recognizing that something is an inference relation just is recognizing that it is truth-preserving (say), and you could only recognize that by having a grip on the relation that it is.)
It’s extremely important to my argument that we keep in full view the fact that I am making an epistemic claim, not a metaphysical one.
From an epistemic position, the proposition P1: “Dave’s mind is capable of thinking the thought that A1 and A2 shared” is experimentally unfalsifiable. No matter how many times, or how many different ways, I try to think that thought and fail, that doesn’t prove I’m incapable of it, it just means that I haven’t yet succeeded.
But each such experiment provides additional evidence against P1. The more times I try and fail, and the more different ways I try and fail, the greater the evidence, and consequently the lower the posterior probability of P1.
If you’re simply asserting that that posterior probability can’t ever reach zero, I agree completely.
If you’re asserting that that posterior probability can’t in practice ever reach epsilon, I mostly agree.
If you’re asserting that that posterior probability can’t in practice get lower than, say, .01, I disagree.
(ETA: In case this isn’t clear, I mean here to propose “I repeatedly try to understand in detail the thought underlying A1 and A2′s cooperation and I repeatedly fail” as an example of a reason to think that the thought in question is not one I can think.)
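To make that updating concrete, here is a minimal Bayesian sketch in Python (the numbers are purely hypothetical: it assumes a starting credence of 0.5 and that each failed attempt is three times as likely if P1 is false as if it is true):

# Hypothetical updating: repeated failed attempts to think the thought
# push my credence in P1 ("I am capable of thinking it") downward.
p = 0.5                      # assumed starting credence in P1
lr = 3.0                     # assumed odds factor against P1 per failure
for attempt in range(10):
    odds = p / (1.0 - p)     # convert probability to odds
    odds /= lr               # each failure multiplies the odds against P1
    p = odds / (1.0 + odds)  # convert back to a probability
print(p)                     # about 1.7e-05: far below .01, yet never zero

Nothing here proves incapability; it just shows how accumulated failures can drive the probability below any practical threshold.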
I think that overestimates my claim: suppose Dave were a propositional logic machine, and the A’s were first order logic machines. If we were observing Dave and the Aliens, and given that we are capable of thinking more expressively than either of them, then we could have reason for thinking that Dave cannot think the thoughts that the Aliens are thinking (let’s just assume everyone involved is thinking). So we can prove P1 to be false in virtue of stuff we know about Dave and stuff we know about what the Aliens are saying.
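To spell out the expressiveness gap assumed in this example: there are first-order claims with no propositional equivalent over an infinite domain, a standard one being

$\forall x \, \exists y \; R(x, y)$

A propositional-logic machine can only form truth-functional combinations of atomic sentences, so it cannot even represent the quantified claim; we, thinking in a more expressive system, can see both what the Aliens are saying and that Dave cannot say it.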
That, again, is not my point. My point is that Dave could never have reasons for thinking that he couldn’t think what the Aliens are thinking, because Dave could never have reasons for thinking both A) that the aliens are in a given case doing some thinking, and B) that this thinking is thinking that Dave cannot do. If B is true, A is not something Dave can have reasons for. If Dave can have reason for thinking A, then B is false.
So suppose Dave has understood that the aliens are thinking. By understanding this, Dave has already and necessarily assumed that he and the aliens share a world, that he and the aliens largely share relevant beliefs about the world, and that he and the aliens are largely rational.
If you agree that one cannot have reason to think that an action or belief is rational or true without knowing the content or intention of the belief or action, then I think you ought to agree that whatever reasons Dave has for thinking that the aliens are rational are already reasons for thinking that Dave can understand them.
And to whatever extent we third party observers can see that Dave cannot understand them, just to that extent Dave cannot have reasons for thinking that the aliens are rational. In such a case, Dave may believe that the aliens are thinking and it might be impossible for him to understand them. But in this case Dave’s opinion that the aliens are thinking is irrational, even if it is true.
Thus, no one can ever be given any reason (i.e. there can never be any evidence) for thinking that there are thoughts that they cannot think. We can never know that there are no such thoughts either, I suppose.
Supposing both that all of those suppositions were true, and that we could somehow determine experimentally that they were true, then, yes, it would follow that the conclusion was provable.
I’m not sure how we would determine experimentally that they were true, though. I wouldn’t normally care, but you made such a point a moment ago about the importance of your claim being about what’s knowable rather than about what’s true that I’m not sure how to take your current willingness to bounce back and forth between that claim about what can be known in practice, and these arguments that depend on unknowable-in-practice presumptions.
Then I suppose we can safely ignore it for now.
As I’ve already said, in this example I have reason to believe A1 and A2 are doing some thinking, and if I make a variety of good-faith-but-unsuccessful attempts to recapitulate that thinking I have reason to believe I’m incapable of doing so.
Is it sufficient to suppose that Dave has reasons to believe the aliens are thinking?
I’m willing to posit all of those things, and I can imagine how they might follow from a belief that the aliens are thinking, for sufficiently convenient values of “world”, “largely”, and “relevant”. Before I lean too heavily on any of that I’d want to clarify those words further, but I’m not sure it actually matters.
I don’t agree with this. Just to pick a trivial example, if you write down a belief B on a slip of paper and hand it to my friend Sam, who I trust to be both a good judge of and an honest reporter of truth, and Sam says to me “B is true,” I have reason to think B is true but I don’t know the content of B.
The premise is false, but I agree that were it true your conclusion would follow.
This seems to be a crucial disagreement, so we should settle it first. In your example, you said that you trust Sam to be a good judge and an honest reporter of truth. This means, among other things, that you and Sam share a great many beliefs, and that you think Sam makes judgements roughly in the same ways you do.
So, you mostly understand the kinds of inferences Sam draws, and you mostly understand the beliefs that Sam has. If you infer from this that B is true because Sam says that it is, you must be assuming that B isn’t so odd a belief that Sam has no competence in assessing it. It must be something Sam is familiar enough with to be comfortable assessing. All that said, you’ve got a lot of beliefs about what B is, without knowing the specifics.
Essentially, your inference that B is true because Sam says that it is, is the belief that though you don’t know what B says specifically, B is very likely to either be one of your beliefs already or something that follows straightforwardly from some of your beliefs.
In other words, if you have good reason to think B is true, you immediately have good reason to think you know something about the content of B (i.e. that it is or follows from one of your own beliefs). Thinking that B is probably true just is believing you know something about B.
(ETA: I want to add how closely this example resembles your aliens example, both in the set up, and in how (I think) it should be answered. In both cases, we can look at the example more closely and discover that in drawing the conclusion that the aliens are thinking or that B is true, a great deal is assumed. I’m saying that you can either have these assumptions, but then my translation point follows, or you can deny the translation point, but then you can’t have the assumptions necessary to set up your examples.)
All right.
Sure, if Sam and I freely interact and I consider him a good judge and honest reporter of truth, I will over time come to believe many of the things Sam believes.
Also, to the extent that I also consider myself a good judge of truth (which has to be nontrivial for me to trust my judgment of Sam in the first place), many of the beliefs I come to on observing the world will also be beliefs Sam comes to on observing the world, even if we don’t interact freely enough for him to convince me of his belief. This is a little trickier, because not all reasons for belief are fungible… I might have reasons for believing myself a good judge of whether Sam is a good judge of truth without having reasons for believing myself a good judge of truth more generally. But I’m willing to go along with it for now.
Agreed so far.
No, I don’t follow this at all. I might think Sam comes to the same conclusions that I would given the same data, but it does not follow in the least that he uses the same process to get there. That said, I’m not sure this matters to your argument.
Yes, both in the sense that I can mostly predict the inferences Sam will draw from given data, and in the sense that any arbitrarily-selected inference that Sam draws is very likely to be one that I can draw myself.
Yes, in the same ways.
Something like this, yes. It is implicit in this example that I trust Sam to recognize whether B is outside his competence to evaluate, and to report that fact if it is; so it follows from his not having reported any such thing that I’m confident B isn’t outside his competence.
Certainly. In addition to all of that stuff, I also have the belief that B can be written down on a slip of paper, with all that that implies.
Statistically speaking, yes: given an arbitrarily selected B1 for which Sam would report “B1 is true,” the prior probability that I already know B1 is high.
But this is of course in no sense guaranteed. For example, B might be “I’m wearing purple socks,” in response to which Sam checks the color of your socks, and subsequently reports to me that B is true. In this case I don’t in fact know what color socks you are wearing.
Again, statistically speaking, sure.
No. You are jumping from “X is reliable evidence of Y” to “X just is Y” without justification.
If X smells good, I have reason to believe that X tastes good, because most things that smell good also taste good. But it is quite possible for me to both smell and taste X and conclude “X smells good and tastes bad.” If “thinking that X smells good just is believing that X tastes good” were true, I would at that point also believe “X tastes good and tastes bad,” which is not in fact what happens. Therefore I conclude that “thinking that X smells good just is believing that X tastes good” is false.
Similarly, if Sam reports B as true, I have good reason to think B is probably true, and I also have good reason to think I know something important about the content of B (e.g., that it is or follows from one of my own beliefs), because most things that Sam would report as true I also know something important about the contents of (e.g., ibid). But it’s quite possible for Sam to report B as true without me knowing anything important about the content of B. I similarly conclude that “thinking that B is probably true just is believing [I] know something [important] about B” is false.
In case it matters, not only is it possible for me to believe B is true when I don’t in fact know the content of B (e.g., B is “Abrooks’ socks are purple” and Sam checks your socks and tells me “B is true” when I neither know what B says nor know that Abrooks’ socks are purple), it’s also possible for me to have good reason to believe that I don’t know the content of B in this situation (e.g., if Sam further tells me “Dave, you don’t know the content of B”… which in fact I don’t, and Sam has good reason to believe I don’t.)
You know that B is likely to be one of your beliefs, or something that follows straightforwardly from your beliefs. It makes no difference if B actually turns out not to be one of your beliefs or something that follows straightforwardly therefrom. Likewise, you would have good reason to guess that the outcome of a die roll is 1-5 as opposed to 6. If it turns out that it comes up 6, this does not impugn the probability involved in your initial estimate. Knowing how dice work is knowing something about this die roll and its outcome. By knowing how dice work, you know that the outcome of this roll is probably 1-5, even if it happens to be 6. Knowing how Sam’s judgements work is knowing something about this judgement.
None of this, I grant you, involves knowing the specific content of B. But all of this is knowledge about the content of B. If Sam said to you “Dave, you don’t know the content of B”, you ought to reply “Sam, I know enough about your beliefs and judgements that I really do know something about the content of B, namely that it’s something you would judge to be true on the basis of a shared set of beliefs.”
Your set up, I think, draws an arbitrary distinction between knowledge of the specific content of B and knowledge of B as a member of someone’s set of beliefs. Even if there’s any distinction here (i.e. if we’re foundationalists of some kind), it still doesn’t follow that knowledge of the second kind is wholly unrelated to knowledge of the first. In fact, that would be astonishing.
So, I’m not saying that because you have reason to believe B to be true, you therefore have reason to believe that you know the content of B. What I’m saying is that because you have reason to believe B to be true, you therefore do know something about the content of B.
I hope we can agree that in common usage, it’s unproblematic for me to say that I don’t know what color your socks are. I don’t, in fact, know what color your socks are. I don’t even know that you’re wearing socks.
But, sure, I think it’s more probable that your socks (if you’re wearing them) are white than that they’re purple, and that they probably aren’t transparent, and that they probably aren’t pink. I agree that I know something about the color of your socks, despite not knowing the color of your socks.
And, sure, if you’re thinking “my socks are purple” and I’m thinking “Abrooks’ socks probably aren’t transparent,” these kinds of knowledge aren’t wholly unrelated to one another. But that doesn’t mean that either my brain or my command of the English language is capable of traversing the relationship from one to the other.
Much as you think I’m drawing arbitrary distinctions, I think you’re eliding over real distinctions.
Okay, so it sounds like we’re agreed that your reasons for believing B are at the same time things you take yourself to know about the content of B. Would you accept that this is always going to be true? Or can you think of a counterexample?
If this is always true, then we should at least take this in support of my more general claim that you cannot have reason to think that something is rational or true, i.e. that something is thinking, without taking yourself to know something about the content of that thought.
If we’re on the same page so far, then we’ve agreed that you can’t recognise something as thought without assuming you can understand something about its content. Now the question remains, can you understand something to be a thought or part of a thought while at the same time having reason to think it is fundamentally unintelligible to you? Or does the very recognition of something as a thought immediately give you reason to think you can understand it, while evidence against your understanding justifies you only in concluding that something isn’t thought after all?
Yes, my reasons for believing B are, in the very limited sense we’re now talking about, things I know about the content of B (e.g., that the value of a die roll is probably between 1 and 5).
Yes, agreed that if I think something is thinking, I know something about the content of its thought.
Further agreed that in the highly extended sense that you’re using “understanding”—the same sense that I can be said to “know” what color socks you’re wearing—I understand everything that can be understood by every thinking system, and my inability to understand a thing is evidence against its being a thought.
So, OK… you’ve proven your point.
I continue to think that by insisting on that extended sense of the word, you are eliding over some important distinctions. But I appreciate that you consider those distinctions arbitrary, which is why you insist on ignoring them.
Oh, come on, this has been a very interesting discussion. And I don’t take myself to have proven any sort of point. Basically, if we’ve agreed to all of the above, then we still have to address the original point about precision.
Now, I don’t have a very good argument here for thinking that you can go from knowing some limited and contextual things about the content of a thought to knowing the content with as much precision as the thinker. But here goes: suppose you have a cooperative and patient alien, and that you yourself are intent on getting the translation right. Also, let’s assume you have a lot of time, and all the resources you could want for pursuing the translation. So given unlimited time, and full use of metaphor, hand gestures, extended and complex explanations in whatever terms you do manage to get out of the context, corrections of mistakes, etc. etc., I think you could cover any gap so long as you can take the first step. And so long as the thought isn’t actually logically alien.
This means that the failure to translate something should be taken not as evidence that it might be impossible, but as evidence that it is in fact possible to translate. After all, if you know enough to have reason to believe that you’ve failed, you have taken the first few steps already.
As to whether or not logically alien thought, thought which involves inferences of which we are incapable, is possible, I don’t know. I think that if we encountered such thought, we would pretty much only have reason to think that it’s not thought.
So, forget about proving anything. Have I made this plausible? Does it now seem reasonable to you to be surprised (contra the original quote) to hear that there are thoughts we cannot think? If I’ve utterly failed to convince you, after all, I would take that as evidence against my point.
My position on this hasn’t changed, really.
I would summarize your argument as “If we can recognize them as thinking, we are necessarily mutually intelligible in some highly constrained fashion, which makes it likely that we are mutually intelligible in the more general case. Conversely, if we aren’t mutually intelligible in the general case, we can’t recognize them as thinking.”
My objection has been and remains with the embedded assumption that if two systems are mutually intelligible in some highly constrained fashion, it’s likely that they are mutually intelligible in the more general case. On average this might well be true, but the exceptions are important. (Similar things are true when playing Russian roulette. On average it’s perfectly safe, but I wouldn’t recommend playing.)
My reason for objecting remains what it was: evolved systems are constrained by the environment in which they evolved, and are satisficers rather than optimizers, and are therefore highly unlikely to be general-purpose systems. This is as true of cognitive systems as it is of digestive systems. I would be as surprised to hear of an alien mind thinking thoughts I can’t think as I would be to hear of an alien stomach digesting foods I can’t digest—that is, not surprised at all. There’s nothing magic about thought, it’s just another thing we’ve evolved to be able to do.
That said, I would certainly agree that when faced with a system I have reason to believe is thinking, the best strategy for me to adopt is to assume that I can understand its thoughts given enough time and effort, and to make that effort. (Similarly, when faced with a system I have reason to believe needs food, I should assume that I can feed it given enough time and effort, and make that effort.)
But when faced with a system that I have reason to believe is thinking and where all plausible efforts have failed, I am not justified in concluding that it isn’t thinking after all, rather than concluding that its thinking is simply alien to me.
I guess my problem with this claim is similar to my problem with the original quote: the analogy between sensations and thoughts is pretty weak, such that the inference from incompatible sensations to incompatible thoughts is dubious. The analogy between thoughts and digestion is even weaker. The objection that we’re organisms of a certain kind, with certain biological limits, is one which involves taking an extremely general point, and supposing that it bears on this issue in particular. But how? Why think that the biological character of our brain places any relevant limits on what we can think? This might seem like an obvious connection, but I don’t think it can go without support. The connection between neural activity and brain structures on the one hand and thoughts on the other is not so clear that we can just jump from such general observations about the one to specific claims about the other.
So how can we fill out this reasoning?
Yes, it does seem like an obvious connection to me. But, all right...
For example, I observe that various alterations of the brain’s structure (e.g., those caused by brain lesions or traumatic brain injury) create alterations in the kinds of thoughts the brain can think, and I infer from this that the brain’s structure constrains the kinds of thoughts it can think.
And as I said, I consider the common reference class of evolved systems a source of useful information here as well.
Incidentally, didn’t you earlier agree that brains weren’t general-purpose inference-traversing devices? I thought your position was that language filled this role. (Even more incidentally, does it follow from this that two nonlinguistic thinking systems being mutually nonintelligible is more plausible for you? Actually, do you consider nonlinguistic thinking systems possible in the first place?)
I don’t think this is a good inference: it doesn’t follow from the fact that defective brains are constrained in some of their cognitive capacities that for healthy brains there are thoughts that they cannot think (and not for reasons of memory storage, etc.). First, this involves an inference from facts about an unhealthy brain to facts about a healthy brain. Second, this involves an inference from certain kinds of limitations on unhealthy brains to other kinds of limitations on healthy brains. After all, we’ve agreed that we’re not talking about limits on thinking caused by a lack of resources like memory. None of the empirical work showing that brain damage causes cognitive limits is strictly relevant to the question of whether or not other languages are translatable into our own.
This is still my position.
No, I don’t consider that to be possible, though it’s a matter of how broadly we construe ‘thinking’ and ‘language’. But where thinking is the sort of thing that’s involved in truth values and inference relations (the truth predicate is probably not actually necessary), and where language is what we are using to communicate right now, then I would say “there is nothing that thinks that cannot use language, and everything that can use language can to that extent think.”
As I said the last time this came up, I don’t consider the line you want to draw on “for reasons of memory storage, etc” to be both well-defined and justified.
More precisely, I freely grant that if there are two minds A and B such that A can think thought T and B cannot think T, that there is some physical difference D between A and B that causes that functional difference, and whether D is in the category of “memory storage, etc.” is not well defined. If any physical difference counts, then I guess I agree with you: if A can think T and B is physically identical to A, then B can think T as well. But that’s one hell of an additional condition.
It follows that as humans evolved from our most recent non-thinking-and-non-linguistic forebear, the ability to think and the ability to use language appeared within the same individual. Yes?
Well, I take it for granted that you and I can think the same thought (say “It is sunny in Chicago”), even though (I think) we have no reason to believe the neural activity involved in this thinking is identical in our two brains. So the fact that there are physical differences between two thinkers doesn’t immediately mean that they cannot think the same thoughts. I expect you can think all the same thoughts that I think if we were to make a project of it. And yet it is implausible (and as far as I know empirically unsupported) to think that part or all (or even any) of your brain would as a result become structurally identical to mine.
So physical differences can matter, but among healthy brains, they almost always don’t. No two english speakers have structurally identical brains, and yet we’re all fully mutually intelligible.
So we can’t infer from physical differences to cognitive incompatibilities. I asked you in my last post to give me some reasons for the inference from ‘our brains are evolved systems’ to ‘we can have reason to believe that there are thoughts we cannot think’ or ‘there are thoughts we cannot think’. Is there some inferential meat here, or is this a hunch? Have I misrepresented your view?
Yes, I think so, though of course there wasn’t a ‘first thinker/language user’.
This is another place where I want to avoid treating “Y is near enough to X for practical considerations” as equivalent to “Y is X” and then generalizing out from that to areas outside those practical considerations.
I would certainly agree that you and I can think two thoughts Ta and Tb and have them be similar enough to be considered the same thought for practical purposes (the case where both Ta and Tb map to “It is sunny in Chicago” might be an example, depending on just what we mean by that utterance). I would similarly agree that we have no reason to expect, in this case, either the neural activity involved in this thinking or the biochemical structures that support and constrain that neural activity to be exactly identical.
Sure, but why are you limiting the domain of discourse in this way?
If Tom has a stroke and suffers from aphasia, he is less mutually intelligible with other English speakers than he was before the stroke, and his brain is less relevantly similar to that of other English speakers. As his brain heals and the relevant similarities between his brain and mine increase, our mutual intelligibility also increases.
I certainly agree that if we ignore Tom altogether, we have less reason to believe that structure constrains function when it comes to cognition than if we pay attention to Tom. But I don’t see why ignoring him is justified.
I would say rather that the relevant parts of two English speakers’ brains are very similar, and their mutual intelligibility is high. This is precisely what I would expect from a relationship between relevant structural similarity and mutual intelligibility.
As above, this is equivalent to what you said for practical considerations.
If you don’t consider anything I’ve said on this question over the last twenty comments or so to provide meat for inference, then I think we can classify it as “just a hunch” for purposes of this conversation.
The point isn’t that we should ignore him. The point is that your assumption that the difference between Tom and a healthy brain is relevant to this question is (at least as yet) undefended.
Maybe you could point me to something specific? In reviewing our conversations, I found statements of this inference, but I didn’t find a defense of it. At one point you said you took it to be obvious, but this is the best I could find. Am I just missing something?
I don’t know if you’re missing anything.
I accept that you consider the items on which I base the belief that brain structure constrains the set of inferential relations that an evolved brain can traverse to be inadequate evidence to justify that conclusion. I don’t expect repeating myself to change that. If you genuinely don’t consider them evidence at all, I expect repeating myself to be even less valuable.
I consider it evidence, just weak and indirect in relation to (what I take to be) much stronger and more directly related evidence that we can assume that anything we could recognize as thinking is something we can think. Such that, on balance, I would be surprised to hear that there are such thoughts.
It sounds like we’ve pretty much exhausted ourselves here, so thanks for the discussion.
Can you rotate four dimensional solids in your head?
Edit: it looks like I’m not the first to suggest this, but I’ll add that since computers are capable not just of representing more than three spatial dimensions, but of tracking objects through them, these are probably “possible thoughts” even if no human can represent them mentally.
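For instance, here is a minimal sketch (assuming NumPy) of a computer “tracking” a point through a four-dimensional rotation that no human seems able to visualize directly:

import numpy as np

theta = np.pi / 4                         # rotate by 45 degrees
# In four dimensions, rotations act in planes; this one rotates the x-w plane.
R = np.eye(4)
R[0, 0] = np.cos(theta); R[0, 3] = -np.sin(theta)
R[3, 0] = np.sin(theta); R[3, 3] = np.cos(theta)

vertex = np.array([1.0, 0.0, 0.0, 1.0])   # a corner of a unit tesseract
print(R @ vertex)                         # tracked to [0., 0., 0., 1.4142...]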
Well, suppose I’m colorblind from birth. I can’t visualize green. Is this significantly different from the example of 4d rotations?
If so, how? (ETA: after all, we can do all the math associated with 4d rotations, so we’re not deficient in conceptualizing them, just in imagining them. Arguably, computers can’t visualize them either. They just do the math and move on).
If not, then is this the only kind of thought (i.e. visualizations, etc.) that we can defend as potentially unthinkable by us? If this is the only kind of thought thus defensible, then we’ve rendered the original quote trivial: it infers from the fact that it’s possible to be unable to see a color that it’s possible to be unable to think a thought. But if these kinds of visualizations are the only kinds of thoughts we might not be able to think, then the quote isn’t saying anything.
If you discount inaccessible qualia, how about accurately representing the behaviors of subatomic particles in a uranium atom?
I’m not a physicist, but I have been taught that beyond the simplest atoms, the calculations become so difficult that we’re unable to determine whether our quantum models actually predict the configurations we observe. In this case, we can’t simply do the math and move on, because the math is too difficult. With our own mental hardware, it appears that we can neither visualize nor predict the behavior of particles on that scale, above a certain level of complexity, but that doesn’t mean that a jupiter brain wouldn’t be able to.
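A back-of-the-envelope sketch of why the math outruns us (treating each of uranium’s 92 electrons as a mere two-state system, which is a gross simplification):

# The exact quantum state of n two-state particles takes 2**n amplitudes.
for n in (2, 10, 92):
    print(n, 2 ** n)
# 2**92 is about 5e27 amplitudes, hopeless to store or manipulate exactly
# on any present-day hardware, mental or otherwise.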
I’m not discounting qualia (that’s its own discussion), I’m just saying that if these are the only kinds of thoughts which we can defend as being potentially unthinkable by us, then the original quote is trivial.
So one strategy you might take to defend thoughts we cannot think is this: thinking is or supervenes on a physical process, and thus it necessarily takes time. All human beings have a finite lifespan. Some thought could be formulated such that the act of thinking it with a human brain would take longer than any possible lifespan, or perhaps just an infinite amount of time. Therefore, there are thoughts we cannot think.
I think this suggestion is basically the same as yours: what prevents us from thinking this thought is some limited resources, like memory or lifespan, or something like that. Similarly, I could suggest a language that is in principle untranslatable, just because all well formed sentences and clauses in that language are long enough that we couldn’t remember a whole one.
But it would be important to distinguish, in these cases, between two different kinds of unthinkability or untranslatability. Both the infinite (or just super complex) thoughts and the super long sentences are translatable into a language we can understand, in principle. There’s nothing about those thoughts or sentences, or our thoughts or sentences, that makes them incompatible. The incompatibility arises from a fact about our biology. So in the same line, we could say that some alien species’ language is untranslatable because they speak and write in some medium we don’t have the technology to access. The problem there isn’t with the language or the act of translation.
In sum, I think that this suggestion (and perhaps the original quote) trades on an equivocation between two different kinds of unthinkability. But if the only defensible kind of unthinkability is one on the basis of some accidental limitation of access or resources, then I can’t see what’s interesting about the idea. It’s no more interesting then than the point that I can’t speak Chinese because I haven’t learned it.
For me, it merely brings it to the level of “interesting speculation”. What observations would provide strong evidence that there be dragons? Other weak evidence that just leaves it at much the original level is the existence of anosognosia—people with brain damage who appear to be unable to think certain thoughts about their affliction. But that doesn’t prove anything about the healthy brain, any more than blindness proves the existence of invisible light.
Some people seem unable to grok mathematics, but then, some people do. The question is whether, Turing-completeness aside, the best current human thinking is understanding-complete, subject only to resource limitation.
So if Majus’s post (on Pinker) is correct, and the underlying processing engine(s) (aka “the brain”) determine the boundaries of what you can think about, then it is almost tautological that no one can give you an example, since to date almost all folks have a very similar underlying architecture.
So what I argued was that thoughts are by nature commensurable: it’s just in the nature of thoughts that any thinking system can think any thought from any other thinking system. There are exceptions to this, but these exceptions are always on the basis of limited resources, like limited memory.
So, an application of this view is that there are no incommensurable scientific schemes: we can in principle take any claim from any scientific paradigm and understand or test it in any other.
All I argued was that if their thesis is correct, then unless you’ve had some very odd experiences, no one can give you an example because everyone you meet is similarly bounded.
That is the limit of what my statement was intended to convey.
I don’t know enough neurology, psychology, etc. to have a valid opinion, but I will note that we see at most 3 colors. We perceive many more. But any time we want to perceive, for example, the AM radio band, we map it into a spectrum our eyes can handle, and as near as I can tell we “think” about it in the colors we perceive.
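For instance, a toy sketch of that remapping (the band edges and the linear mapping are my own arbitrary choices):

# Map the AM broadcast band (roughly 540-1700 kHz) linearly onto the
# visible spectrum (roughly 400-700 nm) so the eye can "perceive" it.
def am_to_visible_nm(khz, lo=540.0, hi=1700.0):
    frac = (khz - lo) / (hi - lo)         # position within the AM band
    return 400.0 + frac * 300.0           # corresponding visible wavelength

print(am_to_visible_nm(1000.0))           # ~519 nm, a green
print(am_to_visible_nm(1700.0))           # 700 nm, deep red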
It is my understanding that there is some work in this area where certain parts of the brain handle certain types of work. Folks with certain types of injuries or anomalous structures are unable to process certain types of input, and unable to do certain kinds of work. This seems to indicate that while our brain, as currently constructed, is a fairly decent tool for working out the problems we have in front of us, there is some evidence that it is not a general-purpose thinking machine.
(in one of those synchronicity thingies my 5-year-old just came up to me and showed me a picture of sound waves coming into an ear and molecules “traveling” into your nose).
Barbara Alice Mann
I agree with the necessity of making life more fair, and disagree with the connotational noble Pocahontas lecturing a sadistic western patriarch. (Note: the last three words are taken from the quote.)
Agree that that looks an awful lot like an abuse of the noble savage meme. Barbara Alice Mann appears to be an anthropologist and a Seneca, so that’s at least two points where she should really know better—then again, there’s a long and more than somewhat suspect history of anthropologists using their research to make didactic points about Western society. (Margaret Mead, for example.)
Not sure I entirely agree re: fairness. “Life’s not fair” seems to me to succinctly express the very important point that natural law and the fundamentals of game theory are invariant relative to egalitarian intuitions. This can’t be changed, only worked around, and a response of “so make it fair” seems to dilute that point by implying that any failure of egalitarianism might ideally be traced to some corresponding failure of morality or foresight.
You are confusing “fairness” and egalitarianism. While everyone has their own definition of “fairness”, it feels obvious to me that, even if you’re correct about the cost of imposing reasonable egalitarianism being too high in any given situation, this does not absolve us from seeking some palliative measures to protect those left worst off by that situation. Reducing first the suffering of those who suffer most is an ok partial definition of fairness for me.
Despite (or due to, I’m too sleepy to figure it out) considering myself an egalitarian, I would prefer a world where the most achieving 10% get 200 units of income (and the top 10% of them get 1000), the least achieving 10% get 2 units and everyone else gets 5-15 units (1 unit supporting the lifestyle of today’s European blue-collar worker) to a world where the bottom 10% get 0.2 units and everyone else gets 25-50. Isn’t that more or less the point of charity (aside from signaling)?
I didn’t say this. Actually, I’d consider it somewhat incoherent in the context of my argument: if imposing reasonable egalitarianism (whatever “reasonable” is) was too costly to be sustainable, it seems unlikely that we’d have developed intuitions calling for it.
On the other hand, I suppose one possible scenario where that’d make sense would be if some of the emotional architecture driving our sense of equity evolved in the context of band-level societies, and if that architecture turned out to scale poorly—but that’s rather speculative, somewhat at odds with my sense of history, and in any case irrelevant to the point I was trying to make in the grandparent.
Anyway, don’t read too much into it. My point was about the relationship between the world and its mathematics and our anthropomorphic intuitions; I wasn’t trying to make any sweeping generalizations about our behavior towards each other, except in the rather limited context of game theory and its various cultural consequences. I certainly wasn’t trying to make any prescriptive statements about how charitable we should be.
Some of the local Right are likely to claim that we developed them just for the purpose of signaling, and that they’re the worst thing EVAH when applied to reality. ;)
(Please don’t take this as a political attack, guys, my debate with you is philosophical. I just need a signifier for you.)
ominous theme music
Well, someone certainly has been digging into the LessWrong equivalent of Sith holocrons. You are getting pretty good at integrating their mental tool kit. It has made your thinking clearer and made your positions stronger than would have been otherwise possible.
Now, far be it from me to question such a search for knowledge. Indeed, I commend it. It is a path to great predictive power! You will find that as you continue your studies it can offer many useful heuristics that some would consider … unthinkable.
You know, I was not wholly unprepared for this ideological predicament. Since I first became interested in Fascist-like ideas and the history of political conflict surrounding them (during high school), I’ve always had a hunch that “the enemy” is far wiser, more attractive and more insidious than most people who pretend to “common sense” believe. It is the radical Right themselves and the radical Left who oppose both them and mainstream liberalism (which is “common sense” to our age) that have a more realistic estimate of this conflict’s importance. Even in spite of the fact that said Right has been hounded and suppressed since 1940, including, in a gentler way, by moderate conservatives eager to attain a more enlightened image. To quote again from Orwell’s review of Mein Kampf:
Of course, the above can’t be applied to all such right-wing radicals without adjusting for their personal differences—e.g. Mencius criticizing idealism as the root of all evil both on the right and on the left, while himself possessing a less-than-obvious but very distinct sort of idealism [1] - but still. If exposed to today’s political blogosphere, Orwell could undoubtedly have constructed similar respectful warnings for all his radical opponents he’d find solid. The people who dreaded and obsessed over “Fascism”, and continue to do so to this day—as well as the contrarians who actually walk that path—have clearer vision than the complacent masses. That the idea is in retreat and on the decline does not affect its strict consistency, decent compatibility with human nature and inherent potential.
Still, when all’s said and done I view the situation as half a rational investigation and half a holy war (for a down-to-earth definition of “holy”); I don’t currently feel any erosion in my values or see myself reneging at the end of it. Yet—and thank you for your compliment—I’m certainly eager to familiarize myself with as much of the other side’s intellectual weaponry as it’s possible to without getting significantly Sapir-Whorfed.
-[1] (I’m not going to describe in detail here Moldbug’s many similarities and differences with classical thought that has been called fascist; I’ll only mention that he himself admitted that calling his vision a “fascist technocracy” has “a grain of truth”—and, of course, I’m rather skeptical of his pretensions to exceptional pragmatism and non-mindkilledness)
I think that Robert Smith has a much wiser take on this: “The world is neither fair nor unfair”
The world is neither F nor ~F?
Unfair is the opposite of fair, not the logical complement. The moon is neither happy nor sad.
That is indeed possible if F is incoherent or has no referent. The assertion seems equivalent to “There’s no such thing as fairness”.
I’m confused because it was Eliezer who taught me this.
EDIT: I’m now resisting the temptation to tell Eliezer to “read the sequences”.
Original parent says, “The world is neither fair nor unfair”, meaning, “The world is neither deliberately fair nor deliberately unfair”, and my comment was meant to be interpreted as replying, “Of course the world is unfair—if it’s not fair, it must be unfair—and it doesn’t matter that it’s accidental rather than deliberate.” Also to counteract the deep wisdom aura that “The world is neither fair nor unfair” gets from counterintuitively violating the (F \/ ~F) axiom schema.
It matters hugely that it’s not deliberately unfair. People get themselves into really awful psychological holes—in particular the lasting and highly destructive stain of bitterness—by noting that the world is not fair, and going on to adopt a mindset that it is deliberately unfair.
It matters a lot (to those who are vulnerable to the particular kind of irrational bitterness in question) that the universe is not deliberately unfair.
I took Eliezer’s “it doesn’t matter” to be the more specific claim “it does not matter to the question of whether the universe is unfair whether the unfairness present is deliberate or not-deliberate”.
Err, the “question of whether the universe is unfair” sounds a lot to me like the “question of whether the tree makes a sound”. What query are we trying to hug here? I think what I call “unfairness”—something due to some agent—is something we can at least sometimes usefully respond by being pissed off, because the agent doesn’t want us to be pissed off. But the Universe absolutely cannot care whether we’re pissed off, and so putting it under the same category as eg discrimination engenders the wrong response.
What makes being pissed off at an agent who treats me unfairly useful is not that the agent doesn’t want me to be pissed off. In fact, I can sometimes be usefully pissed off at an unfair agent that is entirely indifferent to, or even unaware of, my existence. In much the same way, I can sometimes be usefully pissed off at a non-agent that behaves in ways that I would classify as “unfair” if an agent behaved that way.
Admittedly, asking when it’s useful to classify something as “unfair” is different from asking what things are in fact unfair.
On the other hand, in practice the first of those seems most relevant to actual human behavior. The second seems to pretty quickly lead to either the answer “everything” (all processes result in output distributions that are not evenly distributed across some metric) or “nothing” (all processes are equally constrained and specified by physical law) and neither of those answers seems terribly relevant to what anyone means by the question.
No, that fairness isn’t a characteristic you can measure of the world. There’s such a thing as fairness when it comes to eg dividing a cake between children.
“The world is fair” = world.fairness > 0
“The world is unfair” = world.fairness < 0
“The world is neither fair nor unfair” = world.fairness == 0
…or something like this.
I didn’t think I could remove the quote from that attitude about it very effectively without butchering it. I did lop off a subsequent sentence that made it worse.
Do people typically say “life isn’t fair” about situations that people could choose to change?
Don’t they usually say it about situations that they could choose to change, to people who don’t have the choice?
Exactly. In my experience the people who say “life isn’t fair” are the main reason that it still isn’t.
How did you develop a sufficiently powerful causal model of “life” to establish this claim with such confidence?
I mean that in almost all of the situations where I’ve heard that phrase used, it was used by someone who was being unfair and who couldn’t be bothered to make a real excuse.
Okay, but that is a very different claim. It could be true even while most sources of unfairness in life are other things, not people who bother to say “life’s not fair”.
I agree, it’s usually used as an excuse not to try to change things.
Introspection tells me this statement usually gets trotted out when the cost of achieving fairness is too high to warrant serious consideration.
EDIT: Whoops, I just realised that my imagination only outputted situations involving adults. When imagining situations involving children I get the opposite of my original claim.
Could you give an example of such a situation where the cost of achieving “fairness” is indeed too high for you? Because I have a hunch that we differ not so much in our assessment of costs but in our notions of “fairness”. Oh, and what is “serious consideration”? Is a young man thinking of what route he should set his life upon and wanting to increase “fairness” doing more or less serious consideration than an adult deciding whether to give $500 to charity?
Current example: A friend of mine telling her very intelligent son that he has to do boring schoolwork because life isn’t fair.
It occurs to me to ask her whether a good gifted and talented program is available.
Hmm? I know I’m no-one to tell you those things and it might sound odd coming from a stranger, but… please try persuading her to attend to the kid’s special needs somehow. Ideally, I believe, he should be learning what he loves plus things useful in any career like logic and social skills, with moderate challenge and in the company of like-minded peers… but really, any improvement over either the boredom of standard “education” or the strain of a Japanese-style cram school would be fine. It pains me to see smart children burning out, because it happened to me too.
I’ve talked with her. Her son is already in a Gifted and Talented program, but they’re still expecting too much busy work from him—he’s good at learning things that he’s interested in the first time he hears them, and doesn’t need drilling.
He’s got two years more of high school to go.
I’ve convinced her that it’s worthwhile to work on convincing the school that they should modify the program into something that’s better for him, and also that it’s good for him to learn about advocacy as well as (instead of?) accommodation. I think she cares enough that this isn’t going to fall off the to do list, but I’ll ask again in a couple of months.
Thanks for pushing about this.
Great. That’s going to brighten up a very very shitty day I’m having, BTW. I got my father moderately angry and disappointed in me for an insubstantial reason (he’s OK but kind of emotional and has annoying expectations), and then my mom phoned from work in tears to say that her cat electrocuted itself somehow. I have just got very high on coffee to numb emotion and am browsing LW right now until I can take a peek at reality again.
Me, I’ve burned out many times in school. Each time it happened, I was sent to psychiatrists as punishment.
I don’t remember exactly what I imagined, but it was something like this:
Actually, I’d say that it could be a case where justice can assert itself… the boss is, barring unusual circumstances, going to lose out on a skilled worker and that could impact his business.
(I mean, presumably the overly high cost of achieving fairness in that case would be passing a law telling employers how to make hiring decisions… but that idiot of a boss would benefit from such a law if the heuristics in it were good; now he’s free to shoot himself in the foot!)
Bob is telling Alice that life isn’t fair. Bob is Alice’s friend; he is not the boss. Bob seems like he has Alice’s interests in mind, since it is unlikely that Alice “doing something about it” would be worth it (such as confronting the boss, suing the company, picketing on the street outside the building, etc...). She is probably better off just continuing her job search. This is independent of whether or not Alice’s decision is best for society as a whole.
Oh, that makes sense.
The problem with saying that we should make life more fair is that life is often unfair with regard to our ability to make it more fair.
The automatic pursuit of fairness might lead to perverse incentives. I have in mind some (non-genetically related) family in Mexico who don’t bother saving money for the future because their extended family and neighbours would expect them to pay for food and gifts if they happen to acquire “extra” cash. Perhaps this “Western” patriarchal peculiarity has some merit after all.
Is this really about fairness? Seems like different people agree that fairness is a good thing, but use different definitions of fairness. Or perhaps the word fairness is often used to mean “applause lights of my group”.
For one person, fairness means “everyone has food to eat”; for another, fairness means “everyone pays for their own food”. Then proponents of one definition accuse the others of not being fair—the debate is framed as if the problem is not different definitions of fairness, but rather our group caring about fairness and the other group ignoring fairness; which of course means that we are morally right and they are morally wrong.
IDK, but I have heard people refer to fairness in similar situations, so I am merely adopting their usage.
I agree. To a large degree the near universal preference for “fairness” in humans is illusory, because people mean mutually contradictory things by it.
I believe “fairness” can be given a fairly rigorous definition (I have in mind people like Rawls), but the second you get explicit about it, people stop agreeing that it is such a good thing (and therefore, it loses its moral force as a human universal).
One wonders whether food and gifts translate into status more or less effectively than whatever they might buy to that end in “Western” society would. Scare quotes because most of Mexico isn’t much more or less Western than the US, all things considered.
Yeah, the scare quotes are because I dislike the use of “Western” to mean English-speaking cultures rather than the Greek-Latin-Arabic influenced cultures.
I’m not convinced fairness is inherently valuable.
Envy is an unpleasant emotion that should probably be eliminated.
I like being part of egalitarian social groups, but I don’t think status inequality has to follow inevitably from material inequality.
I don’t think that fairness is terminally valuable, but I think it has instrumental value.
Mad Men, “My Old Kentucky Home”
Another good one from Don Draper:
This is mistaken because systems can and do assemble out of sufficiently similar people pursuing self-interest in a way that ends up coordinated because their motivations are alike. Capitalism is the simplest and most obvious example of such a system, but I’d argue things like patriarchy and racism are similar.
The point is that the system doesn’t have a particular overriding goal, or central coordination, and isn’t interested in you personally. In context, he was speaking to counter-culture people who thought the system was against them, in an ego-satisfying way that made them feel significant. He countered that it is simply indifferent to them.
Arthur C. Clarke
The trouble is, the most problematic kinds of faith can survive it just fine.
Which leads us to today’s Umeshism: “Why are existing religions so troublesome? Because they’re all false, so the only ones still around are the ones dangerous enough to survive the truth.”
I’m not sure if I can really call myself Gnostic, but if I can, mine’s neither troublesome*, nor does it make any claims inconsistent with a sufficiently strong simulation hypothesis.
-* (when e.g. Voegelin was complaining about “Gnostic” ideas of rearranging society, he was 1) obviously excluding any transformation he approved of, perhaps considering it “natural” and not dangerous meddling, and 2) blaming a fairly universal kind of radicalism correlated with all monotheistic or quasi-monotheistic worldviews; he’s essentially privileging the hypothesis to vent about personality types he dislikes, and conservatives should really look at these things more objectively for the sake of their own values)
Um, no. He was complaining about attempts to rearrange society from the top down.
The problem is, hardly anyone else would describe a person who’s actually in a position of power to do the rearranging—like e.g. Lenin—as “Gnostic”; he has certainly been known as a dreamer blind to reality, but as I pointed out that’s a very general indictment. The way it’s actually used throughout history, “Gnosticism” has the connotations of a monastic life and mystical pursuits, detached from daily life or outright fleeing from society; after all, no leader who actually left a noticeable mark on society has ever been called that. Many parallels have been drawn between Marxism/Fascism/transhumanism/etc. and religious fundamentalism, but those parallels did not include a persecuted, non-populist and underground branch of a religion.
The word has always been associated with “heresy”, and a tendency that’s imposing its own dogma & suppressing opposition is not called a “heresy”. Voegelin should’ve introduced a new term for the category of people he wanted to indict instead of appropriating an unsuitable word.
That’s very nice to say, but people are apt to find giving up some faiths very emotionally wrenching and socially costly (even if the faith isn’t high status, a believer is likely to have a lot of relationships with people who are also believers). Now what?
The other day I was thinking about Discworld, and then I remembered this and figured it would make a good rationality quote...
-- Terry Pratchett, Feet of Clay
Reminded of a quote I saw on TV Tropes of a MetaFilter comment by ericbop:
Sounds like Vimes doesn’t like Sherlock Holmes much.
Gee, you think?
Well, the quote made me think of this. Now that I looked up that post I notice that it is downvoted, so perhaps it isn’t relevant. But the behavior that Vimes expresses distrust of in the Pratchett quote is pretty much the exact behavior that is used to show off how intelligent/perceptive Holmes is, and which the poster wants to use as an example for rationalists.
It is relevant and obvious. I suppose it was downvoted for the latter.
Douglas Adams, Dirk Gently’s Holistic Detective Agency
-- C. S. Lewis
-G. K. Chesterton, The Curse of the Golden Cross
-- Douglas Adams. The Long Dark Tea-Time of the Soul (1988) p.169
I can’t find the quote easily (it’s somewhere in God, No!), but Penn Jillette has said that one aspect of magic tricks is the magician putting in more work to set them up than anyone sane would expect.
I’m moderately sure that he’s overestimating how clearly the vast majority of people think about what’s needed to make a magic trick work.
His partner Teller says the same thing here:
Edit: That trick is 19 minutes and 50 seconds into this video.
It’s not clear to me that clear thought on the part of the audience is necessary to make that statement true.
Yes, exactly the same idea. Partial versions of your quote have been posted twice on LW already, and might have inspired me to post the earlier Chesterton version, but I liked seeing the context for the Adams one that you provide.
Out of context, the quote makes much less sense; the specific example illustrates the point much better than the abstract description does.
Just for fun, which of the following extremely improbable events do you think is more likely to happen first:
1) The winning Mega Millions jackpot combination is 1-2-3-4-5-6 (Note that there are 175,711,536 possible combinations, and drawings are held twice a week.)
2) The Pope makes a public statement announcing his conversion to Islam (and isn’t joking).
Assuming that the 1-2-3-4-5-6 win must occur by a legit random drawing (not a prank or a bug of some kind that is biased towards such a simple result), then I’d go for the Pope story as more likely to happen on any given day in the present. After all, there have historically been many examples of highly ranked members of groups who sincerely defected to opposing groups, starting with St. Paul. But I confess I’m not very sure about this, and I’m too sleepy to think about the problem rigorously.
In the form you posed the question (“which is more likely to happen first”) it is much more difficult to answer because I’d have to evaluate how likely are institutions such as the lottery and the Catholic Church to persist in their current form for centuries or millennia.
Good point.
It’d be even more fun if you replaced “1-2-3-4-5-6” with “14-17-26-51-55-36”. (Whenever I play lotteries I always choose combinations like 1-2-3-4-5-6, and I love to see the shocked faces of the people I tell, then tell them that it’s no less likely than any other combination but at least easier to remember, and watch their perplexed faces for the couple of seconds it takes them to realize I’m right. Someone told me that if such a combination ever won they’d immediately think of me. (Now that I think about it, choosing a Schelling point does have the disadvantage that should I win, I’d have to split the jackpot with more people, but I don’t think that’s ever gonna happen anyway.))
Dunno how you would count the (overwhelmingly likely) case where both Mega Millions and the papacy cease to exist without either of those events happening first, but let’s pretend you said “more likely to happen in the next 10 years”… Event 1 ought to happen 0.6 times per million years on average; I dunno about the probability per unit time for Event 2, but it’s likely about two orders of magnitude larger.
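For the curious, the arithmetic behind that 0.6-per-million-years figure checks out. Here is a minimal Python sketch, assuming the then-current Mega Millions format (5 balls from 56 plus a Mega Ball from 46, which is where the 175,711,536 figure above comes from) and two drawings a week:

    from math import comb

    combos = comb(56, 5) * 46          # 3,819,816 * 46 = 175,711,536 possible tickets
    drawings_per_year = 2 * 52         # two drawings per week
    rate = drawings_per_year / combos  # expected 1-2-3-4-5-6 draws per year
    print(rate * 1e6)                  # ~0.59 events per million years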
Aren’t you choosing an anti-Schelling point? It seems to me that people avoid playing low Kolmogorov-complexity lottery numbers because of a sense that they’re not random enough—exactly the fallacious intuition that prompts the shocked faces you enjoy.
Choosing something that’s “too obvious” out of a large search space can work if you’re playing against a small number of competitors, but when there are millions of people involved, not only are some of them going to un-ironically choose “1-2-3-4-5-6”, but more than one person will choose it for the same reason it appeals to you.
Thank you for that insightful observation.
Just to follow up, army1987’s actual choice is:
So whether this choice is Schelling or anti-Schelling depends on reference sets that are quite fuzzy on the specified information, to wit, the set of non-random-seeming selections and (the proportion of players in) the set of people who play them.
I still think many more people pick any given low Kolmogorov-complexity combination than any given high Kolmogorov-complexity combination, if anything because there are fewer of the former. If 0.1% of the people picked 01-02-03-04-05 / 06 and 99.9% of the people picked a combination from http://www.random.org/quick-pick/ (and discarded it should it look ‘not random enough’), there’d still be 175 thousand times as many people picking 01-02-03-04-05 / 06 as 33-39-50-54-58 / 23. (Likewise, the fact that the most common password is
password
doesn’t necessarily mean that there are lots of idiots: it could mean that 0.01% of the people pick it and 99.99% pick one of more than 9,999 more complicated passwords. Not that I’m actually that optimistic.)

With this in mind I think I would choose combinations that match the pattern /[3-9][0-9][3-9][0-9][1-6][0-9]/. Six-digit numbers look too much like dates!
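To make the “175 thousand times” concrete, here is a quick sketch of the arithmetic, under the purely illustrative 0.1% / 99.9% split assumed above:

    combos = 175_711_536               # possible combinations, as above

    p_simple = 0.001                   # assumed share picking 01-02-03-04-05 / 06
    p_each_random = 0.999 / combos     # share landing on any one specific quick-pick
    print(p_simple / p_each_random)    # ~176,000: the “175 thousand times” above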
1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery. That makes it considerably more likely to be reported as the outcome to a lottery, even if it’s not more likely to be the outcome of a stochastic method of selecting numbers.
After seeing quite a few examples, I’ve recently become very sensitive to comparisons of an abstract idea of something with an objective something, as if they were on equal footing. Your question explicitly says the Pope conversion is a legitimate non-shenanigans event, while not making the same claim of the lottery result. Was that intentional?
No, I just didn’t think of it. (Assume that I meant that, if someone happens to have bought a 1-2-3-4-5-6 ticket, they would indeed be able to claim the top prize.)
I might not have worded that very clearly.
You said that the Pope was definitely not joking (or replaced by a prankster in a pope suit), but left it open as to whether the lottery result was actually a legitimate sequence of numbers drawn randomly from a lottery machine, or somehow engineered to happen.
In that sense, you’re comparing a very definite unlikely event (the Pope actually converting to Islam) to a nominally unlikely event (1-2-3-4-5-6 coming up as the lottery results, for some reason that may or may not be a legitimate random draw). Was that intentional?
No, but if someone successfully manages to rig the lottery to come up 1-2-3-4-5-6, and doesn’t get caught, I’d count that as an instance. Similarly, if the reason the Pope issued the public statement was that his brother was being held hostage or something, and he recants after he’s rescued, that’s good enough, too; I just wanted to rule out things like April Fools jokes, or off-the-cuff sarcastic remarks.
I don’t think that’s true. If you were going to tamper with the lottery, isn’t your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?
I specified “overt tampering” rather than “covert tampering”. If you wanted to choose a result that would draw suspicion, 1-2-3-4-5-6 strikes me as the most obvious candidate.
Why would anyone want to do that? (I’m sure that any reason for that would be much more likely than 1 in 175 million, but still I can’t think of it.)
The three most obvious answers (to my mind) are:
1) to demonstrate your Big Angelic Powers
2) to discredit the lottery organisers
3) as a prank / because you can
The former will happen about once every couple of million years on average, so I’d say the latter is more likely by at least a factor of 100.
The ghost of Parnell is Far, the presentation to the Queen is Near?
Perhaps. I had thought of the quote in the context of a distinction between epistemic/Bayesian probability and physical possibility or probability. For us (though perhaps not for Father Brown) the ghost story is physically impossible, it contradicts the basic laws of reality, while the presentation story does not. (In terms of the MWI we might say that there is a branch of the wavefunction where Gladstone offered the Queen a cigar, but none where a ghost appeared to him.) However, we might very well be justified in assigning the ghost story a higher epistemic probability, because we have more underlying uncertainty about (to use your words) Far concepts like the possibility of ghosts than about Near ones like how Gladstone would have behaved in front of the Queen.
I seem to instinctively assign the ghost story a lower probability. The lesson of the quote might still be valid, can you come up with an example that would work for me?
Sure. Take one mathematical fact which the mathematical community accepts as true, but which has a complicated proof only recently published and checked. Surely your epistemic probability that there is a mistake in the proof and the theorem is false should be larger than the epistemic probability of the Gladstone story (if you are not convinced, add more outrageous details to it, like Gladstone telling the Queen “What’s up, Vic?”). But according to your current beliefs, in the actual world the theorem is necessarily true and its negation impossible, while the Gladstone story is possible in the MWI sense.
Whuh? I have logical uncertainty about the theorem.
(George Orwell’s review of Mein Kampf)
(well, we have videogames now, yet… we gotta make them better! more visceral!)
I don’t see that that’s true. Germany loved Hitler when he was giving them job security and easy victories and became much less popular once the struggle and danger and death arrived on the scene.
They grumbled, but 95% of them obeyed, worked, killed and died up until the spring of 1945. A huge number of Germans certainly believed that sticking with the Nazis until the conflict’s end was a much lesser evil compared to another national humiliation on the scale of Versailles. And look at the impressive use to which he and Goebbels put evaporative cooling of group beliefs to radicalize the faithful after the July plot. Purging a few malcontents led to a significant increase in zeal and loyalty even as things were getting visibly worse and worse.
Full review here:
There’s a pretty good and complete archive of all things by St. George at orwell.ru, by the way. As a pleasant exercise, I’m going to go through the Russian translations over there and see if I can correct anything.
On politics as the mind-killer:
-- Julian Sanchez (the whole post is worth reading)
Does anyone know the exact quote to which he is referring here?
We’ve reached the point where the weather is political, and so are third person pronouns.
Well, third-person pronouns were always political—it’s just that only the last century’s shift in values and ideological attitudes has allowed the spread of gender-neutral pronouns. Before that the issue was taken to be completely one-sided.
Conversely, evolution does not count as “political” here because we all belong to one camp. (Posted from Louisiana.)
I think it’s this but I’m not sure:
Tell that to Socrates.
Given that they supposedly drowned people for discussing irrational numbers, that seems false.
Sorry to have to tell you this, but Pythagoras of Samos probably didn’t even exist. More generally, essentially everything you’re likely to have read about the Pythagoreans (except for some of their wacky cultish beliefs about chickens) is false, especially the stuff about irrationals. The Pythagoreans were an orphic cult, who (to the best of our knowledge) had no effect whatsoever on mainstream Greek mathematics or philosophy.
Source?
Well, my source is Dr Bursill-Hall’s History of Mathematics lectures at Cambridge; I presume his source is ‘the literature’. Sorry I can’t give you a better source than that.
Can anyone confirm this? Preferably with citation?
It’s not like the United States hasn’t also killed people for betraying its secrets.
Wait, is there any actual disagreement about what happened? I’m reading older Julian Sanchez posts, but the only point of disagreement seems to be “Once Zimmerman confronted Martin with a gun, did Martin try to disarm him before getting shot?”. None of what I’ve read considers the question relevant; they base their judgements on already known facts such as “someone shot someone else then was let free rather than have a judge decide whether it counted as self-defense”.
There’s substantial disagreement about the facts. For example, someone was heard yelling for help, but no one agrees whether that was Zimmerman or Martin.
I can talk about Stand-Your-Ground laws and their apparent effect in this case, but I don’t want to drone on.
There is the minor matter of people trying very hard to spin and misrepresent events. At this point I can’t help but link to this very relevant Aurini talk on the subject.
Thank you for the link!
Checking out some of his other videos and links, I found this podcast on the topic to be rather interesting commentary.
Especially the summary of facts starting at the 23 minute mark.
Link doesn’t work. Here is a new one.
Thank you! Fixed the link to match yours.
Yes I listened to that podcast as well.
I am much more confident that Zimmerman was not the attacker than I was about the innocence of Amanda Knox. His instant demonization and near-lynching (people putting out a dead-or-alive bounty) seem like very troubling developments for American society.
More justice for Trayvon I guess.
“Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It is shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad’Dib knew that every experience carries its lesson.”
Frank Herbert, Dune
It took me years to learn not to feel afraid due to a perceived status threat when I was having a hard time figuring something out.
A good way to make it hard for me to learn something is to tell me that how quickly I understand it is an indicator of my intellectual aptitude.
Interesting article about a study on this effect:
This seems like a more complicated explanation than the data supports. It seems simpler, and equally justified, to say that praising effort leads to more effort, which is a good thing on tasks where more effort yields greater success.
I would be interested to see a variation on this study where the second-round problems were engineered to require breaking of established first-round mental sets in order to solve them. What effect does praising effort after the first round have in this case?
Perhaps it leads to more effort, which may be counterproductive for those sorts of problems, and thereby lead to less success than emphasizing intelligence. Or, perhaps not. I’m not making a confident prediction here, but I’d consider a praising-effort-yields-greater-success result more surprising (and thus more informative) in that scenario than the original one.
I agree that the data doesn’t really distinguish this explanation from the effect John Maxwell described; mainly I just linked it because the circumstances seemed reminiscent and I thought he might find it interesting. It’s worth noting, though, that these aren’t competing explanations: your interpretation focuses on explaining the success of the “effort” group, and the other focuses on the failure of the “intelligence” group.
To help decide which hypothesis accounts for most of the difference, there should really have been a control group that was just told “well done” or something. Whichever group diverged the most from the control, that group would be the one where the choice of praise had the greatest effect.
I think the universe is not usually engineered to perversely punish effort. Extra effort may sometimes be counterproductive… but I think most people I know fail more often for too little effort than for too much. “Use the Try Harder, Luke” is usually good advice.
I agree, so if you intended this as a counterpoint, it seems to follow that I have inconsistent beliefs. If so, can you expand?
I’m inferring more than you said, which isn’t making it easy for anyone to understand me. Sorry about that.
If you think your comment discusses an edge case, and that it’s a good general practice to praise/reward effort rather than intelligence, then we are in agreement and this conversation should probably end. If you think it’s a good general practice to spend the cognitive effort required to scan the world for situations where each type of praise/reward would most help… then I think we’re disagreeing.
Long comment following—summary at bottom.
Dweck’s work sounded a strong chord for me. I was an intelligent kid often praised for my intelligence, and often very scared that I would be discovered not to be as intelligent as everyone seemed to think I was (because the world was full of stuff that I wasn’t immediately good at). I therefore avoided many pursuits that I thought would lead others to discover their previous overestimate of my innate, fixed intelligence. I think there are many children and adults who live in that place (I think that, for example, there is a lot of evidence in Eliezer’s writing that he has a fixed conception of intelligence, e.g. http://lesswrong.com/lw/bdo/rationality_quotes_april_2012/68n2). I also think that praise of my intelligence in my youth had a strong influence on my forming that model (fixed intelligence; not being good at something immediately is evidence that you’re not as clever as they thought).
After reading Dweck’s work I’ve tried hard to alter my model of the universe. Innate intelligence obviously varies between individuals… but that’s not very helpful or important to me, and spending time thinking about it doesn’t help me much. As an individual with whatever innate capacity I have I benefit much more by considering the very significant impact my efforts have on what I can understand and what I can achieve. Anyone I meet who praises me for my (innate, fixed) intelligence undermines my efforts to focus on what I can change, so hurts my efforts at self improvement. Anyone who praises me for something I can change (effort, technique, practice, diligence, etc.) helps me to become a better person.
I think this is particularly important with children—watching someone praise a child for a fixed trait now causes me to flinch as if that child had just been slapped.
Summary:
I think it likely that there exist edge cases where praising intelligence will boost performance on some particular following task, but I think that in nearly all cases the person thus praised will suffer over the longer term due to the much greater frequency of tasks that that form of praise hurts. I think that most people in most cases will benefit more from Dweck style praise of effort (more precisely, any trait they can control), and that that’s more true over longer timeframes.
Well, what my comment discusses is a potential direction of research, and makes some predictions about the results of that, and isn’t really about application at all.
As far as application goes, I agree that it’s a good general practice to praise/reward effort rather than intelligence. Also to reward effort rather than strength, dexterity, attractiveness, and various other attributes.
More generally, I think it’s a good practice to reward behaviors rather than attributes. Rewarding behaviors gets me more of those behaviors. Rewarding attributes gets me nothing predictable.
There’s something to be said for rewarding results instead of effort to teach people to make sure they are actually trying rather than trying to try.
Better results than fixed attributes, certainly. No objection to rewarding results as well. My primary concern with rewarding results instead is that it seems to create the incentive to only tackle problems I’m confident I can succeed at.
I’ve seen this study cited a lot; it’s extremely relevant to smart self- and other-improvement. But there are various possible interpretations of the results, besides what the authors came up with… Also, how much has this study been replicated?
I’d like to see a top-level post about it.
Dupe
-Carl Rogers, On Becoming a Person: A Therapist’s View of Psychotherapy (1961)
David Pearce
This is analogous to my main worry as someone who considers himself a part of the anti-metaphysical tradition (like Hume, the Logical Positivists, and to an extent Less Wrongers): what if, by avoiding metaphysics, I am simply doing bad metaphysics?
As an experiment, replace ‘metaphysics’ and ‘metaphysical’ with ‘theology’ and ‘theological’ or ‘spirituality’ and ‘spiritual’. Then the confusion is obvious.
Unless I don’t understand what you mean by metaphysics, and just have all those terms bunched up in my head for no reason, which is also possible.
Yes. There is a difference between speaking imprecisely because we don’t know (yet) how to express it better, and speaking things unrelated to reality. The former is worth doing, because a good approximation can be better than nothing, and it can help us to avoid worse approximations.
Well, but what is it that is meant by metaphysics? I’ve heard the word many times, seen its use, and I still don’t know what I’m supposed to do with it.
Ok, so now I’ve read the Wikipedia article, and now I’m unconvinced that when people use the term they mean what it says they mean. I know at least some people who definitely used “metaphysical” in the sense of “spiritual”. What do you mean by metaphysics?
Also unconvinced that it has any reason to be thought of as a single subject. I get the impression that the only reason these topics are together is that they feel “big”.
But I will grant you that given Wiki’s definition of metaphysics, there is no reason to think that it is in principle incapable of providing useful works. I revise my position to state that arguments should not be dismissed because they are metaphysical, but rather because they are bad. Furthermore, I suspect that “metaphysics” is just a bad category, and should, as much as possible, be expunged from one’s thinking.
We may be moving too fast when we expunge metaphysics from our web-of-belief. Say you believe that all beliefs should pay rent in anticipated experiences. What experiences do you anticipate only because you hold this belief? If there aren’t any, then this seems awfully like a metaphysical belief. In other words, it might not be feasible to avoid metaphysics completely. Even if my specific example fails, the metaphysicians claim to have some that succeed. Studying metaphysics has been on my to-do list for a long time (if only to be secure in my belief that we don’t need to bother with it), but for some reason I never actually do it.
(LessWrong implicitly assumes certain metaphysics pretty often, e.g. when they talk about “simulation”, “measure”, “reality fluid”, and so on; it seems to me that “anthropics” is a place where experience meets metaphysics. My preferred metaphysic for anthropics comes from decision theory, and my intuitions about decision theory come to a small extent from theological metaphysics and to a larger extent from theoretical computer science, e.g. algorithmic probability theory, which I figured is a metaphysic for the same reason that monadology is a metaphysic. ISTM that even if metaphysics aren’t as fundamental as they pretend to be, they’re still useful and perhaps necessary for organizing our experiences and intuitions so as to predict/understand prospective/counterfactual experiences in highly unusual circumstances (e.g. simulations).)
When some Lesswrong-users use ‘metaphysics’, they mean other people’s metaphysics. This is much like how some Christians use the term ‘religion’.
Hm… one rationale for such a designation might be: “A ‘metaphysic’ is a model that is at least one level of abstraction/generalization higher than my most abstract/general model; people who use different models than me seem to have higher-level models than I deem justified given their limited evidence; thus those higher-level models are metaphysical.” Or something? I should think about this more.
Your theory is much nicer than mine. Mine essentially amounts to people believing “I understand reality, your beliefs are scientifically justified, he endorses metaphysical hogwash.” Further, at least since the days of the Vienna Circle, some scientifically-minded individuals have used ‘metaphysics’ as a slur. (I mean, at least some of the Logical Positivists seriously claimed that metaphysical terms were nonsense, that is, having neither truth-value nor meaning.)
I have read Yudkowsky discuss matters of qualia and free will. This site contains metaphysics, straight up. I assume that anyone who dismisses metaphysics is either dismissing folk-usage of the term or is taking too much pride in their models of reality (that latter part does somewhat match your stipulative explanation.)
(Oh, I’m not sure if your joke was intentional, but I still think it is funny that some possible humans would reject metaphysics for being ‘models’ which are too ‘abstract’, ‘of higher-level’, and not ‘justified’ given the current ‘evidence’.)
Agreed that Will’s theory is nicer than yours. That said, with emphasis on “some,” I think yours is true. Although the Christians I know are far more likely to use “religion” to refer to Christianity. (Still more so are the Catholics I know inclined to use “religion” to refer to Catholicism.)
I was just referring to some Protestants who will share such statements as “Christianity isn’t a religion, it’s a relationship” or “I hate religion too. That’s why I believe in Jesus.” Of course, most Protestants do not do this.
Ah, I see. The Christians I know are more prone to statements like “Religion is important, because it teaches people about the importance of Jesus’ love.”
Just came across a comment by Deogolwulf in response to a comment on one of Mencius Moldbug’s posts:
Oh, snap!
I couldn’t find the original on a quick Google, but:
Which is to say, believing that something can be entirely explained in terms of something else doesn’t absolve me from the need to deal with it. Even if I and the bull and my preference to remain alive can all be entirely captured by the sufficiently precise specification of a set of quarks, it doesn’t follow that there exists no such person, no such bull, or no such preference.
The argument was a meta-level undermining argument supporting the necessity of metaphysical reasoning (of the exact sort that you’re engaging in in your comment);—it wasn’t an argument about the merits of reductionism. That would likely have been clearer had I included more context; my apologies.
(nods) Context is often useful, agreed.
Also, metaphysical reasoning is often necessary, agreed.
Sadly, I often find it necessary in response to metaphysical reasoning introduced to situations without a clear sense of what it’s achieving and whether that end can be achieved without it.
In this sense it’s rather like lawyers.
Not that I’m advocating eliminating all the lawyers, not even a little.
Lawyers are useful.
They’re even useful for things other than defending oneself from other lawyers.
But I’ve also seen situations made worse because one party brought in a lawyer without a clear understanding of the costs and benefits of involving lawyers in that situation.
I suspect that a clear understanding of the costs and benefits of metaphysical reasoning is equally useful.
Where is that quote from, out of curiosity?
If I could remember that, I probably could have found it on Google in the first place.
...fair enough. I tried looking on Google, and couldn’t find it either. Perhaps your quote is original enough for you to claim authorship :-/
Perhaps? I’m fairly sure I read it somewhere, but my memory is unreliable.
Deogolwulf is the sort of fellow who uses ‘proposition’ while obviously meaning ‘statement’. Also, some of the first paragraph is pure unreflective sophistry. Still, the second half:
Following this epistemic attack, I am imagining Deogolwulf holding up a mirror to TGGP’s face and stating “No, TGGP, you are the metaphysics.”
I think part of the problem is different senses of the word “reduce”. Consider the following statements:
1) All things ultimately reduce to quarks (nitpick: and leptons)
2) Quarks and leptons ultimately reduce to quantum wave functions.
3) Quantum wave functions ultimately reduce to mathematics.
4) All mathematics ultimately reduces to the ZFC axioms.
Notice that all these statements are true (I’m not quite sure about the first one) for slightly different values of “reduces”.
What?
When someone on Lesswrong uses the term ‘simulation’, they are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B). (This particular subject often falls under the part of metaphysics known as ontology.)
The same applies to usage of most terms.
Correct me if I’m wrong, but “They are probably making some implicit metaphysical claims about what it means for some object(A) to be a simulation of some other object(B).” and “They are probably making some implicit claims about what it means for some object(A) to be a simulation of some other object(B)” mean exactly the same thing.
They do happen to mean the same thing. This is because the question “What does it mean for some y to be an x?” is a metaphysical question.
“They are probably making some aesthetic claim about why object(A) is more beautiful than object(B)” and “They are probably making some claim about why object(A) is more beautiful than object(B)” also mean the same thing.
Come to that, they both probably mean the same thing as “They are probably making some implicit claims about how some object(B) differs from some other object (A) it simulates,” which eliminates the reference to meaning as well.
Well, that’s a “should” statement, so we cash it out in terms of desirable outcomes, e.g.:
People who spend more time elaborating on their non-anticipatory beliefs will not get as much benefit from doing so as people who spend more time updating anticipatory beliefs.
If two people (or groups, or disciplines) ostensibly aim at the same goals, and deploy similar amounts of resources and effort; but one focuses its efforts with anticipation-controlling beliefs while the other relies on non-anticipation-controlling beliefs, then the former will achieve the goals more than the latter. (Examples could be found in charities with the goal of saving lives; or in martial arts schools with the goal of winning fights.)
Where Recursive Justification Hits Bottom—EY
Can you give any examples of modern metaphysics being useful?
Ontology begat early AI, which begat object-oriented programming.
I anticipate experiencing more efficient thinking, because I will have to remember less and think about fewer topics, while achieving the same results.
What do you anticipate experiencing after studying metaphysics (besides being able to signal deep wisdom)?
I anticipate understanding the abstract nature of justification, thus allowing me to devise better-justified institutions. I anticipate understanding cosmology and its role in justification, thus allowing me to understand how to transcend the contingent/universal duality of justification. I anticipate understanding infinities and their actuality/non-actuality and thus what role infinities play in justification. I anticipate graving new values on new tables with the knowledge gleaned from a greater understanding of justification—I anticipate seeing what both epistemology and morality are special cases and approximations of, and I anticipate using my knowledge of that higher-level structure to create new values. And so on.
You might be better off studying mathematics, then.
That too, yes. Algorithmic probability is an example of a field that is pretty mathematical and pretty metaphysical. It’s the intellectual descendant of Leibniz’s monadology. Computationalism is a mathematical metaphysic.
If you would be so kind as to try and tell me what you mean by “metaphysic”, I would be much less confused.
By “metaphysic” I mean a high-level model for phenomena or concepts that you can’t immediately falsify because, though the model explains all of the phenomena you are aware of, the model is also very general. E.g., if you look at a computer processor you can say “ah, it is performing a computation”, and this constrains your anticipations quite a bit; but if you look at a desk or a chair and say “ah, it is performing a computation”, then you’ve gotten into metaphysical territory: you can abstract away the concept of computation and apply it to basically everything, but it’s unclear whether or not doing so means that computation is very fundamental, or if you’re just overapplying a contingent model. Sometimes when theorizing it’s necessary to choose a certain metaphysic: e.g., I will say that I am an instance of a computation, and thus that a computer could make an exact simulation of me and I would exist twice as much, thus making me less surprised to find myself as me rather than someone else. Now, such a line of reasoning requires quite a few metaphysical assumptions—assumptions about the generalizability of certain models that we’re not sure do or don’t break down—but metaphysical speculation is the best we can do because we don’t have a way of simulating people or switching conscious experience flows with other people.
That’s one possible explanation of “metaphysic”/”metaphysics”, but honestly I should look into the relevant metaphilosophy—it’s very possible that my explanation is essentially wrong or misleading in some way.
Why would generality be opposed to falsifiability? Wouldn’t having a model be more general lead to easier falsifiability, given that the model should apply more broadly?
In order to tell whether something is performing a computation, you try to find some way to get the object to exhibit the computation it is (allegedly) making. So, if I understand correctly, a model is metaphysical, in the things you write, if applying it to a particular phenomenon requires an interpretation step which may or may not be known to be possible. How does this differ from any other model, except that you’re allowing yourself to be sloppy with it?
If you just replace “metaphysic” by “model”, “metaphysical assumptions” by “assumptions about our models and their applicability”, “metaphysical speculation” by “speculations based on our models”, I think the things you’re trying to say become clearer. If a bit less fancy-sounding.
If the thing I understood is the thing you tried to say.
I could replace all my uses of the word “metaphysical” with “sloppily-general”, I guess, but I’m not sure it has quite the right connotations, and “metaphysical” is already the standard terminology. “Metaphysical” is vague in a somewhat precise way that “sloppily-general” isn’t. I appreciate the general need for down-to-earth language, but I also don’t want to consent to the norm of encouraging people to take pains to write in such a way as to be understood by the greatest common factor of readers.
“X is a metaphysic” becomes “X is somehow a model (of something), but I’m not sure how”. “Y is metaphysical” becomes “Y is about or related to a model (somehow)”. I assume my understanding is correct, since you didn’t correct it. “sloppily-general” is then indeed kind of far from the intended meaning, but that’s just because it’s a terrible coinage.
Elsewhere, somebody posted a link to the Stanford Encyclopedia of Philosophy’s definition of metaphysics. They say right in the intro that they haven’t found a good way to define it. The body of the Wikipedia article on metaphysics implies a different definition than its opening paragraph. In common parlance, it’s used for some vague spiritualish thing. And your definition is different from all of these. Do you think that the term could reasonably be expected to be understood the way you intended it?
“Metaphysical” isn’t vague in a somewhat precise way. It isn’t even evocative, as its convoluted etymology prevents even that. It’s just vague and used by philosophers.
The greatest common factor of readers isn’t even here. The point is more to be understood by readers at all. Don’t make your writing more obscure than it needs to be. Hard concepts are hard enough as is, without making the fricking idea of “somehow a model” worth 3 hours’ worth of discussion.
Sorry, I was just too lazy to correct it. Still too lazy.
I give up. Good night.
Metaphysics can’t even be a thing in a web of belief! It’s more a box for a bunch of things, with a tag that says “Ooo”. Unless you want to define it otherwise, or I’m more confused than I think I am. So the category only makes sense if you want to use it to describe your feelings for some given subject. Why would that be a good way to frame a field of study?
That’s what I suspect is the problem with metaphysics; not the things in the box, which are arbitrary, but rather that the box messes up your filing system.
Metaphysics, as a category, has its constituents determined by the contingent events of history. The same could be said for the categories of philosophy and art. As such, ‘metaphysics’ is a convenient bucket whose constituents do not necessarily have similarities in structure. At best, I think one could say that they have a Wittgensteinian family-resemblance. However, I am only defending the academic usage of the term. (More information here.) The folk usage seems to hold that metaphysics is “somewhere between “crystal healing” and “tree hugging” in the Dewey decimal system.”
Well that at least makes some sense. I was noticing that Wiki’s definition and the definition implied by its examples were in conflict. I don’t particularly see why the metaphysics bucket is convenient, though.
Is there any point in discussing metaphysics as anything other than a cultural phenomenon among philosophers?
Unless you are a cladist, ‘reptile’ is a bucket which contains crocodiles, lizards, and turtles, but does not contain birds and mammals. The word is still sometimes useful for communication.
It depends on your goals. I do not generally recommend it, however.
My claim was not about the general lack of utility of buckets. Briefly, the reptile bucket is useful because reptiles are similar to one another, and thus having a way to refer to them all is handy. There is apparently no such justification for “metaphysics”, except in the sense that its contents are related by history. But this clearly isn’t the use you want to make of this bucket.
The word ‘similar’ is often frustratingly vague. However, crocodiles and birds share a more recent common ancestor than crocodiles and turtles.
The word is nonetheless used. I do agree with you that it is frustrating that the word’s usage is historically determined.
Well then the term reptile is somewhat deceptive in evolutionary biology, and based more on some consensus about appearance. Fine. Whatever. The point is that the word metaphysics isn’t evocative in that way or any way, except in the context of its historical usage. As such, it cannot inform us in any way about any subject that isn’t the phenomenon of its acceptance as a field, and is not even a useful subject heading, being a hodgepodge. We can choose whether to continue to use it, and I don’t see why we should.
Within the field of philosophy, the usage is a fairly normal term, much like ‘reptile’ or ‘sex’ are normal terms for most people. Much of my vocabulary comes from that field and I am most comfortable using its terms. ‘Metaphysics’ is one of many problematic terms which are evocative to me, because I understand how these terms are used. Asking someone who studies philosophy to stop using ‘metaphysics’ is like asking someone who studies biology to stop using ‘species’.
However, it is your prerogative to use whatever terms you prefer. I am sure that we are both trying to be pragmatic.
Conventional usage seems to be: speaking about deep intangible topics.
Which is a bad category, because it contains abstract thinking + supernatural claims + complicated nonsense; especially the parts good for signalling wisdom.
It’s a bit confusing in part because of its strange etymology. Originally, “meta” was used in the sense of “after”, since “metaphysics” was the unnamed book that came after “physics” in the standard ordering of Aristotle’s works. Later scholars accidentally connected that to something like our current usage of “meta”, and a somewhat arbitrary field was born.
George Pólya, How to Solve It
...and that’s why the rule doesn’t apply to the reference class of cases I just constructed to only contain my own, Officer.
At which point the officer will demonstrate in no uncertain terms who is the master in the current situation.
-- Marvin Minsky, The Society of Mind
Johan Liebert, Monster
Alfred North Whitehead, “An Introduction to Mathematics” (thanks to Terence Tao)
On specificity and sneaking in connotations; useful for the liberal-minded among us:
-celandine13
How about:
Someone who, following an honest best effort to evaluate the available evidence, concludes that some of the beliefs that nowadays fall under the standard definition of “racist” nevertheless may be true with probabilities significantly above zero.
Someone who performs Bayesian inference that somehow involves probabilities conditioned on the race of a person or a group of people, and whose conclusion happens to reflect negatively on this person or group in some way. (Or, alternatively, someone who doesn’t believe that making such inferences is grossly immoral as a matter of principle.)
Both (1) and (2) fall squarely under the common usage of the term “racist,” and yet I don’t see how they would fit into the above cited classification.
Of course, some people would presumably argue that all beliefs in category (1) are in fact conclusively proven to be false with p~1, so it can be only a matter of incorrect conclusions motivated by the above listed categories of racism. Presumably they would also claim that, as a well-established general principle, no correct inferences in category (2) are ever possible. But do you really believe this?
That (1) only makes sense if there is a “standard” definition of racist (and it’s based on what people believe rather than/as well as what they do). The point of the celandine13 piece was indeed that there’s no such thing.
The evidence someone’s race constitutes about that person’s qualities is usually very easily screened off, as I mentioned here. And given that we’re running on corrupted hardware, I suspect that someone who does try to “perform Bayesian inference that somehow involves probabilities conditioned on the race of a person” ends up subconsciously double-counting evidence and therefore ends up with less accurate results than somebody who doesn’t. (As for cases when the evidence from race is not so easy to screen off… well, I’ve never heard of anybody being accused of racism for pointing out that Africans have longer penises than Asians.)
I have seen accusations of racism as responses to people pointing that out.
Also, according to the U.S. Supreme Court, even if race is screened off, your actions can still be racist or something.
In real life, you don’t have the luxury of gathering forensic evidence on everyone you meet.
I’m not talking about forensic evidence. Even if white people are smarter on average than black people, I think just talking with somebody for ten minutes would give me evidence about their intelligence which would nearly completely screen off that from skin colour. Heck, even just knowing what their job is would screen off much of it.
Also, as Eric Raymond discusses here, especially in the comments, you sometimes need to make judgements without spending ten minutes talking to everyone you see.
There’s this thing called Affirmative Action, as I mentioned elsewhere in this thread.
...
I facepalmed. Really, Eric? Sorry, I don’t think that a moral realist is perceptive enough about the nuances and ethical knots involved to be a judge on this issue. I don’t know, he might be an excellent scientist, but it’s extremely stupid to be so rash when you’re attempting serious contrarianism.
Yep, let’s all try to overcome bias really really hard; there’s only one solution, one desirable state, there’s a straight road ahead of us; Kingdom of Rationality, here we come!
(Yvain, thank you a million times for that sobering post!)
You know, there are countries where the intentional homicide rate is smaller than in John Derbyshire’s country by nearly an order of magnitude.
That thing doesn’t exist in all countries. Plus, I think the reason why you don’t see that many two-digit-IQ people among (say) physics professors is not that they don’t make it, it’s that they don’t even consider doing that, so even if some governmental policy somehow made it easier for black people with an IQ of 90 to succeed than for Jewish people with the same IQ, I would still expect a black physics professor to be smarter than (say) a Jewish truck driver.
That’s not the point. The point is that the black physics professor is less smart than the Jewish physics professor.
But the difference is smaller than for the median black person and the median Jewish person. (I said “even just knowing what their job is would screen off much of it”, not “all of it”.)
The bell curve has both a mean and a deviation; you can have a ‘race’ with a lower mean and larger standard deviation. Then, if you filter by reliable accomplishment of some kind, such as solving some problem that the smartest people in the world attempted and failed at, you may end up with a situation where the population with the lower mean and larger standard deviation has fewer people who attain this, but those who do are on average smarter. Set the bar even higher, and the population with the lower mean and larger standard deviation has more people attaining it. Also, the Gaussian distribution can stop being a good approximation very far away from the mean.
edit: and to reply to the grand-grandparent: I bet I can divide the world into a category that includes you, and a category that does not include you, in such a way that the category including you has a substantially higher crime rate, or is otherwise bad. Actually, if you are from the US, I have a pretty natural ‘cultural’ category where your murder rate is about 5–10x of normal for such average income. Another category is the ‘racists’, i.e. the people who use skin colour as evidence. Those people also behave substantially badly. You of course want to use skin colour as evidence, and don’t want me to use your qualities as evidence. See if I care. If you want to use skin colour as evidence, lumping together everyone who’s black, I want to use ‘use of skin colour as evidence’, lumping you together with all the nasty racists.
IIRC, no substantial difference was found in the standard deviations among races. (Whereas for genders, they have the same mean but males have larger sigma, so there are both more male idiots than female idiots and more male geniuses than female geniuses.)
Isn’t IQ defined to be a Gaussian (e.g. IQ 160 just means ‘99.99683rd percentile among people your age’), rather than ‘whatever IQ tests measure’? If so, a better statement of that phenomenon would be “IQ tests are inaccurate for extreme values.”
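For what it’s worth, that percentile checks out under the usual mean-100, SD-15 convention (the SD-15 convention is an assumption here, since some tests use SD 16). A quick check with scipy:

    from scipy.stats import norm

    # IQ 160 on a mean-100, SD-15 scale is four standard deviations above the mean
    print(norm.cdf(4.0))  # 0.9999683..., i.e. the 99.99683rd percentile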
I want to use ‘use of “use of skin colour as evidence” as evidence’ as evidence, but I’m not sure what that’s evidence for. :-)
Even a small difference translates into an enormous ratio between the numbers of people several standard deviations from the mean...
Yes, and it is defined to have a specific standard deviation as well. That definition makes it an unsuitable measure. The Gaussian distribution also arises from a sum of multiple independent variables. The statement was about intelligence, though, which is a different thing from both “what IQ tests measure” and “how IQ is defined”.
Another huge failing of IQ is that it does not measure the ability to build and use a huge searchable database of methods and facts. Building such a database is a long-term memory task and cannot be tested in a short time span; the existing knowledge can’t be tested without massive influence from the background. Likewise, the IQ test lacks any problems that are actually difficult enough that some solution methods would be known to some people before the test and not to others.
Effectively, the IQ tests do not test for heavily parallel processing capability.
For example, I do believe that it would be possible to build ‘superhuman AI’ that runs on a cellphone and aces IQ tests, and could perhaps deceive a human in brief conversation. The same AI would never be able to invent a stone axe from scratch, let alone anything more complicated; it’d be nothing but a glorified calculator.
Well, the people who use skin colour as evidence, I would guess, are on average less well behaved than the rest of society… so you can use it to guess someone’s criminality or other untrustworthiness.
Indeed, when I last took a few IQ tests I felt like I was being tested more for familiarity with concepts such as exclusive-OR, cyclical permutations, and similar basic discrete maths stuff than for processing power. (Of course, it does take insight to realize that such concepts are relevant to the questions, and processing power to figure out the answer within the time frame of the test, but I think that if I had never heard about XOR or used Sarrus’ rule I would have scored much worse.)
ETA: This is also why I suspect that the correlations between race and IQ aren’t entirely genetic. If Einstein’s twin brother had grown up in a very poor region with no education...
A distribution with mean 100 and st. dev. 14 will exceed one with mean 90 and st. dev. 16 for all x between about 93 and about 170, and there aren’t that many people with IQs over 170 anyway.
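[A quick check of those crossing points, using only the parameters stated in the comment above:]

```python
# Find where the two normal densities N(100, 14) and N(90, 16) cross.
from scipy.optimize import brentq
from scipy.stats import norm

diff = lambda x: norm.pdf(x, 100, 14) - norm.pdf(x, 90, 16)
lo = brentq(diff, 50, 120)    # lower crossing
hi = brentq(diff, 120, 250)   # upper crossing
print(lo, hi)  # roughly 92.5 and 172.9, matching the "about 93 and about 170"
```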
But can we detect a difference as tiny as that between a std dev of 14 and a std dev of 16? After all, we have to control for really many factors that differ between the groups in question.
Also, that was my point: at the level of very high (one in a million) intelligence, i.e. actual geniuses, the people you’d call geniuses without having to detect them using some test. I have a pet hypothesis about the last biological change that caused our technological progress: a little mixing with Neanderthals, raising the standard deviation somewhat.
IQ tests, I think, get useless past some point: the point where the IQ-test savants who solve them at such a level (but can’t learn very well, for example, or can’t do problems that require more parallel processing) start to outnumber geniuses.
What sort of effect size do you expect here? Why?
You have the neonazis among those who use skin colour as evidence of criminality, but not among those who don’t. I don’t know of other differences that have been demonstrated; my expectation for other effects is zero. I should expect the overall effect to be on the order of at least the proportion of racially motivated violence to overall violence; my expectation is somewhat higher than this, though, because I would guess that the near-neonazis are likewise more violent, including within-race crime.
Doh, missed the extra nesting. I doubt it’ll be evidence for much… both neonazis and liberal types use that as evidence, the former as evidence of ingroup-ness and the latter as evidence of badness, so I don’t see what it would discriminate between.
I can’t remember whether I read this from someone else or came up with it on my own, but when people ask “do you oppose homosexual marriage” in questionnaires to find out political orientations, people answering “yes” will include both those who oppose homosexual marriage but are OK with heterosexual marriage and those who oppose all marriage, and those groups are very different clusters in political space (paleo-conservatives the former, radical anarchists the latter). (Of course, the latter group is so much smaller than the former that if you’re doing statistics with large numbers of people this shouldn’t be much of an issue.)
What if verbal ability and quantitative ability are often decoupled?
I wasn’t talking about “verbal ability” (which, to the extent that can be found out in ten minutes, correlates more with where someone grew up than with IQ), but about what they say, e.g. their reaction to finding out that I’m a physics student (though for this particular example there are lots of confounding factors), or what kinds of activities they enjoy.
If you’re able to drive the conversation like that, you can get information about IQ, and that information may have a larger impact than race. But to “screen off” evidence means making that evidence conditionally independent- once you knew their level of interest in physics, race would give you no information about their IQ. That isn’t the case.
Imagine that all races have Gaussian IQ distributions with the same standard deviation, but different means, and consider just the population of people whose IQs are above 132 (‘geniuses’ for this comment). In such a model, the mean IQ of black geniuses will be smaller than the mean IQ of white geniuses which will be smaller than the mean IQ of Jewish geniuses- so even knowing a lower bound for IQ won’t screen off the evidence provided by race!
Huh, sure, if the likelihood is a reversed Heaviside step. If the likelihood is itself a Gaussian, then the posterior is a Gaussian whose mean is the weighted average of the prior’s mean and the likelihood’s, weighted by the inverse squared standard deviations. So even if the st.dev. of the likelihood were half that of the prior for each race, the difference in posterior means would shrink by five times.
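[A minimal numeric check of that conjugate-normal update; all numbers below are made up for illustration.]

```python
# Conjugate-normal update: posterior mean is the precision-weighted
# average of the prior mean and the likelihood mean.
def posterior_mean(prior_mu, prior_sd, like_mu, like_sd):
    wp, wl = prior_sd ** -2, like_sd ** -2   # precision weights
    return (wp * prior_mu + wl * like_mu) / (wp + wl)

# Two group priors 10 points apart, same Gaussian likelihood whose
# sd is half the prior sd (so its precision weight is 4x the prior's):
for prior_mu in (90, 100):
    print(posterior_mean(prior_mu, 15, 130, 7.5))  # prints 122.0, 124.0
# The 10-point prior gap shrinks to 2 points: the promised factor of
# five, from weights 1 (prior) and 4 (likelihood).
```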
Right- there’s lots of information out there that will narrow your IQ estimate of someone else more than their race will, like that they’re a professional physicist or member of MENSA, but evidence only becomes worthless when it’s independent of the quantity you’re interested in given the other things you know.
Can you give an example of evidence becoming worthless? (I can’t think of any.)
You have a theory that a certain kind of building is highly prone to fire. You see a news report that mentions that a building of that kind has burnt down on Main Street. The news report supports your theory—unless you were a witness to the fire the previous night.
If you were promoting the theory before that point, the police may still have some pointed questions to ask you.
I’m talking about how valuable the evidence is to you, the theory-promoter. If you were there, then the news report tells you nothing you didn’t already know.
I understood your point. I was simply making a joke.
In this case, if the news report is consistent with my recollections, it seems that is evidence of the reliability of the news, and of the reliability of my memory, and additional evidence that the event actually occurred that way.
No?
Yeah, true. But having been there the previous night, and having made good observations, certainly makes the news report go from pretty strong evidence to almost nothing.
EDIT: Really, the important thing, I think, is that if your observations are good enough then the evidence from the news report is “worthless”, in the sense that you shouldn’t pay to find out whether there was a news report that backs up your observations. It’s not worth the time it takes to hear it.
Hm.
Maybe I’m missing your point altogether, but it seems this is only true if the only thing I care about is the truth of that one theory of mine. If I also care about, for example, whether news reports are typically reliable, then suddenly the news report is worth a lot more.
But, sure, given that premise, I agree.
Suppose A gives me information about B, and B gives me information about C; they’re dependent. (Remember, probabilistic dependence is always mutual.) A gives me information about C (through B) only if I don’t know B. If I know B, then A is conditionally independent of C, and so learning A tells me nothing about C.
So essentially… a new fact is useless only if it’s a subset of knowledge you already have?
That seems like a fine way to put it.
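[To make the A–B–C chain above concrete, here is a toy sketch; all probabilities are invented.]

```python
# Toy Markov chain A -> B -> C: once B is known, A carries no further
# information about C.
import itertools

p_a = 0.3                       # P(A=1)
p_b_given_a = {0: 0.2, 1: 0.9}  # P(B=1 | A)
p_c_given_b = {0: 0.1, 1: 0.7}  # P(C=1 | B)

def joint(a, b, c):
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def p(c, a=None, b=None):
    """Conditional P(C=c | A=a, B=b); None means unconstrained."""
    num = sum(joint(x, y, z) for x, y, z in itertools.product([0, 1], repeat=3)
              if (a is None or x == a) and (b is None or y == b) and z == c)
    den = sum(joint(x, y, z) for x, y, z in itertools.product([0, 1], repeat=3)
              if (a is None or x == a) and (b is None or y == b))
    return num / den

print(p(c=1), p(c=1, a=1))            # 0.346 vs 0.64: A is evidence about C
print(p(c=1, b=1), p(c=1, a=1, b=1))  # 0.7 vs 0.7: given B, A adds nothing
```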
Minor note: this appears to actually not be the case. Most studies find no correlation between race and penis size. See for example here. The only group for which there may be some substantial difference is Chinese babies, who may have smaller genitalia at birth, but this doesn’t appear to carry over to a significant difference by the time the children have reached puberty. Relevant study.
Huh, according to this map the average Congolese penis is nearly twice as long as the average South Korean penis. (ISTR that stretched flaccid length doesn’t perfectly correlate with erect length.)
Oddly salient for such a trivial result. Should a study qualify for an Ig Nobel if you can use it to settle bar bets?
Where would someone like Steve Sailer fit in this classification?
Indeed, as strange as it might sound (but not to those who know what he usually blogs about), Steve Sailer seems to genuinely like black people more than average, and I wouldn’t be surprised at all if a test showed he wasn’t biased against them, or was less biased than the average white American.
He also doesn’t seem like a racist2 from the vast majority of his writing, and painting him as a racist3 is plainly absurd.
What evidence leads to this conclusion?
He published his IAT results and he’s proposed policies that play to the strengths of blacks.
Historically, proposing policies designed to play to the specific strengths of a minority group is not generally indicative of actually positive feelings about that group.
The IAT is the best measure of ‘genuinely like X people’ we have now, though that’s not saying much. (I believe the only place he published it is VDare, which is currently down.)
What are the competing hypotheses and competing observations, here?
...for a particular value of genuine. (See this, BTW.)
It seems to me the natural interpretation for “genuine” is “unconscious,” and if that post is relevant, it seems that it argues for more relative importance for the IAT over stated positions and opinions.
This is missing Racist4:
Someone whose preferences result in disparate impact.
...and also useful for those among us who don’t identify as “liberal-minded.”
Really? It does seem useful to communicate with the liberal-minded without feeling personally insulted or thinking they’re going way overboard on political correctness. But only liberals and those who think like them seem prone to thinking “Everyone is full of EVIL PREJUDICE except my tribe”.
When I saw this, I could not help but think what an apt demonstration it was of a green accusing the blues of holding a uniquely prejudiced point of view because they are blues, while he, being a green, is of course immune to any such sentiment.
Why is it that wherever I see “greens and blues” mapped to real-world politics, “green” are the liberals and “blue” are the conservatives? example.
EDIT: I misread your comment.
Are you saying that the demographic you are talking about is special in using prejudice as the marker of evilness (as opposed to religious affiliation or whatever), or in taking that sort of attitude at all?
Sort of the latter. Conservatives tend to think people evil for supporting things like gay marriage and abortion—things that all sides agree are supported by one side and opposed by the other. Or to think people fundamentally good, but naive and misguided—everyone agrees poverty is bad, but conservatives think food stamps make it worse, so they oppose liberals who support food stamps.
People who reject both labels seem to regard both conservatives and liberals as cute little bumbling fools who want to do good and thus deserve a pat on the head and a lollipop.
I haven’t spent nearly as much time in conservative circles as in liberal ones, but there is a distinctive pattern among liberals that I would not expect to observe anywhere else: “Let’s solve sexism by putting kittens in a blender!” “Putting kittens in a blender sounds like a bad idea.” “You evil sexist!”.
Leaving as untouched as I possibly can while still participating in this discussion at all the political labeling question here, I am interested in your thoughts as to the structural similarities and differences between the hypothetical conversation you cite about sexism, and a conversation like:
“Let’s make God happy by putting kittens in a blender!”
“Putting kittens in a blender sounds like a bad idea.”
”You evil atheist!”
or
“Let’s improve our capitalist economy by putting kittens in a blender!”
“Putting kittens in a blender sounds like a bad idea.”
”You evil communist!”
I’ve spent a lot of time on the conservative side (between the guns, being in the Military and working in/around the Defense Industry, and in general being a tradition-oriented, more-or-less libertarian) and many of them aren’t any different.
“Gay Marriage will ruin the institution” “Uh. How many times have you been divorced?” “COMMUNIST!” (no, not literally, but YKWIM)
Heck, even the Implicit Association Test assumes that if you’re “liberal” on Gun Control (whatever that means) you’re also Liberal on Gay Marriage and Abortion. Anyone wanna make some assumptions on the Implicit Associations of the writers of that test?
It certainly ruins some aspects. How will the state know which partner to favor in the divorce proceedings if both are the same sex?
The shorter one.
Being 1.6m, I support this decision.
EDIT: Take that, veil of ignorance!
Why not the cuter one?
That works too. A more serious answer here.
Good answer. Does it work that way in practice? I wouldn’t be able to predict whether the halo effect would overcome the sympathy influence and win out in effective total favoritism.
Beats me. I expect there’s a lot of noise here; I was more making a nod towards the standard trope than actually proposing an answer. “The one with less earning power” is also an answer that comes to mind.
If I had to guess, I’d guess that in most jurisdictions where same-sex divorce is no longer so novel as to be singular, the tendency would be to approximate splitting assets down the middle. But I’m no more than .35 confident of that, and even that much depends on a very ad-hoc definition of “no longer so novel.”
It is probably a very bad idea for me to make my first post in reply to something that is blatantly political, on a site which quite actively discourages it, but I’m not very rational. You see, I would probably consider myself more of a liberal than a conservative. I have even attended meetings of feminist organizations, which means that I am a very irrational type of bumbling fool. Nevertheless, I assure you that I would indeed question the ethics of putting kittens in blenders. I would also question the effectiveness of putting kittens in blenders as a means to solve sexism. However, I have never seen such a position proposed before and would be rather shocked to be called an “evil sexist”, even by radical feminists who I do not tend to agree with, for opposing the practice.
Perhaps everything you say is true. Perhaps there is something in liberals that makes us more tribal than the average human being. I would freely admit to being more irrational than rational most of the time. When someone not of my tribe says something I find horrific, my emotions tend to make me go “damn their entire tribe for only they would think such things”, rather than “I disagree with the point this individual is making, though I am sure it is not held by everyone else in his tribe and I am sure there are converse examples of people who have reached the same conclusion in my tribe”.
I see that the inferences you have drawn from your experience at a large number of liberal events and a large number of conservative events have led you to the conclusion that “ONLY liberals and those that think like them seem prone to thinking ‘everyone is full of evil prejudice except my tribe’”. I would have thought that a statement of such strength, particularly since it uses the word ONLY, would require much more than the anecdotal experiences of one individual in order to justifiably reject the null hypothesis. Perhaps you have done many statistical studies on this that I am unaware of. Perhaps you have assumed knowledge of your studies is common among Less Wrong contributors (and I would admit that the average LW contributor is smarter than me, so it’s not too much of a stretch). Indeed, you may have constructed your priors in a completely impartial manner and may indeed be completely justified in assuming the truth of your alternative hypothesis. Nevertheless, I am a little skeptical of the reliability of the methods you used for arriving at the conclusion of attributing this quality to “ONLY liberals and those who think like them”, as opposed to “MOSTLY liberals and those who think like them”.
Unsurprisingly, I have a number of issues with that sentence which are not just political. The set which includes “liberals and those who think like them” is not very well defined. I imagine a liberal thinks more like a conservative than a dog thinks like a liberal or a conservative. Consequently, your set could be defined to include everything within the set “conscious human beings”, as conscious human beings certainly tend to think like other human beings. However, it is very clear from context that this is not what you mean. Do libertarians think like liberals? I imagine many libertarians would say “yes, on a lot of things, but not on many other things. On other things, I tend to think like a conservative”. But clearly, your additional qualifier of “those who think like them” was included to specify that you were not talking about only liberals. Do socialists think like liberals? I imagine a conservative would often say “yes, they do. They both tend to want more government intervention”. Conversely, I think a socialist might say “no, liberals believe in private ownership of the means of production. I believe that system is inherently unjust”. The vast majority of anarchists advocate social anarchism, as the forms of anarchism which have their origins in the labour movement are still the most common forms of anarchism from a worldwide perspective. These anarchists would in fact see themselves as thinking more like orthodox Marxists than US conservatives. They would differ very strongly over the “statist” notion of the dictatorship of the proletariat, but would have similar long-term ends. This puts the conservative who defines his conservatism as an ideology of “less government” in contrast to liberals and socialists in an odd position. You see, if he is not of a very extreme persuasion and is a believer in western democracy in its current form, it would probably be safe to say that he thinks more like liberals and democratic socialists than like a revolutionary social anarchist. So, defining who exactly thinks like a liberal, but is not actually a liberal, is not an easy task. I believe there is a great deal of literature in linguistics and the philosophy of language dealing with the concept of “like” and how difficult it actually is to categorize one thing as being like another thing. Trying to define an agent which thinks like another agent seems, if anything, even more difficult.
Did you perhaps come up with a technical definition for the set of people defined as “liberal or thinks like a liberal”? Did you create questionnaires with a number of propositions associated with the ideology “liberalism” and give them to people in the circles you mentioned, so that you could, to some extent, identify those who were of the set “think like liberals” in non-liberal groups? Perhaps you used a ratio of 13 positive answers to 20 negative answers as a minimum benchmark for those who “think like [liberals]”. Were there questions on these sheets similar in form to “if you could stop sexism by putting kittens in a blender, would you put kittens in a blender?” and “in such circumstances, would you treat anyone stopping you from putting kittens in a blender as the enemy?”? If people in the “liberals and those who think like them” group did answer positively to both of those questions, I would be fairly surprised.
But maybe you have just let political hyperbole get in the way of presenting a potentially more persuasive argument. There is probably a good case to be made for comparatively stronger tribal sentiments in liberals. After all, individualism is a fundamental part of modern day conservatism, but is no longer considered a key component of liberalism. Now liberals are associated with more collectivist values. Consequently, it would not be surprising if studies showed that liberals had emotionally stronger collectivist tendencies than conservatives. Indeed, I think one could be justified in assuming a prior probability of greater than .5 that more collectivist tendencies would be found in liberals than in conservatives if we use the US definition of those terms.
In conclusion, if you had just said something along the lines of “In my own experience, individuals of a liberal political persuasion tend to have stronger views concerning moral judgment of their opponents. Has anyone else noticed this, or am I the only one? If so, are there probable cognitive causes behind this?”, that would at least have seemed more rational. It would have seemed more like something that belongs on Less Wrong. Presenting your argument in that form might have spared you some of that negative karma. If emotions were not getting in your way, maybe you would have noticed that your argument would seem out of place on this website, particularly when you decided to capitalize EVIL PREJUDICE. You might also have realized that when your accusation levied at a political group was questioned, you merely resorted to stronger hyperbole involving kittens in blenders. Your argument had become a soldier, and you decided that you should try to save it by resorting to an argument that was even more absurd and hyperbolic.
I’ve looked at some of your previous contributions and you are clearly intelligent, so I don’t doubt that you probably had a valid point to make. You just could have made it better. You must have noticed that some of your statements just don’t fit the accepted rules of discourse on this site.
I never interpreted MixedNuts’ statement as entailing that liberals have stronger tribal sentiments. Rather, I interpreted it as being that accusing others of prejudice, and jumping on people who oppose proposed solutions to combat prejudice even if the solutions aren’t very good, are distinctly liberal tribal phenomena. A comparable tribal behavior that you would be likely to see among conservatives, but unlikely to see among liberals, would be accusing people of being “unpatriotic.”
Point taken. In hindsight I also seem to have gotten a bit carried away with the above post. I would, however, hold that there are many social/political/religious groups that have a remarkable tendency to see everyone except themselves as remarkably prejudiced because their worldview is not shared. Nevertheless, continuing down this road is not likely to be very productive.
I vote that we abandon ship and shift our attentions back to topics like rationality techniques, game theory, friendly AI and meta-ethics, where we can think more clearly.
Yeah, it was probably a bad idea, but damn I enjoyed reading it.
Attending to specificity and the sneaking in of connotations has benefits that are not limited to dealing with accusations of “EVIL PREJUDICE”.
So if a minority takes the Implicit Association Test and finds out they’re biased against the dominant “race” in their area, are they a Racist1, or not?
I would also really question the validity of the Implicit Association Test. It says “Your data suggest a slight implicit preference for White People compared to Black People.”, which, given that blacks have been severely under-represented in my social sub-culture (Punk/Goth) for the last 27 years, the school I graduated from (Art School), and my professional environments (IT) for the last 20 years, is probably not inaccurate.
However, it also says “Your data suggest a slight implicit preference for Herman Cain compared to Barack Obama.” Which is nonsense. I have a STRONG preference for Herman Cain over Barack Obama.
Looks like we need more “racism”s :D A common definition of racism that reflects the intuitions you bring up is “racism is prejudice plus power” (e.g., here), which isn’t very useful from a decision-making point of view but is very useful when looking at racism as a functional thing experienced by some group.
Surely one of the definitions of “racist” should contain something about thinking that some races are better than others. Or is that covered under “neo-Nazi”?
I’m pretty sure that’s covered under Racist1. Note the word “negative”.
Though it’s odd that Racist1 specifically refers to “minorities”. The entire suite seems to miss folks that favor a “minority” race.
Not really: it is perfectly possible to be explicitly aware of one’s racial preferences and not really be bothered by having such preferences, at least no more than one is bothered by liking salty food or green parks, yet not be a Nazi or prone to violence.
Indeed, I think a good argument can be made not only that a large number of such people lived in the 19th and 20th centuries, but that we probably have millions of them living today in, say, a place like Japan.
And that they are mostly pretty decent and ok people.
Edit: Sorry! I didn’t see the later comments already covering this. :)
Negative subconscious attitudes aren’t the same thing as (though they might cause or be caused by) conscious opinions that such-and-such people are inferior in some way.
Ah yes—it’s extra-weird that someone isn’t allowed in that framework to have conscious racist opinions but not be a jerk about it.
If one has conscious racist opinions, or is conscious that one has unconscious racist opinions (has taken the IAT but doesn’t explicitly believe negative things about blacks) but doesn’t act on them, it’s probably because one doesn’t endorse them. I’d class such a person as a Racist1.
I don’t think not being an “insensitive jerk” is the same as not acting on one’s opinions.
For example, if I think that people who can’t do math shouldn’t be programmers, and I make sure to screen applicants for math skills, that’s acting on my opinions. If I make fun of people with poor math skills for not being able to get high-paying programmer jobs, that’s being an insensitive jerk.
That’s true. I was taking “racist opinions” to mean “incorrect race-related beliefs that favor one group over another”. If people who couldn’t do math were just as good at programming as people who could, and you still screened applicants for math skills, that would be a jerk move. If your race- or gender- or whatever-group-related beliefs are true, and you act on them rationally (e.g. not discriminating with a hard filter when there’s only a small difference), then you aren’t being any kind of racist by my definition.
ETA: did anyone downvote for a reason other than LocustBeamGun’s?
Not to mention a bad business decision.
That too, thanks for pointing it out.
(ETA: I didn’t downvote, but) I wouldn’t call gender differences in math “small”—the genders have similar average skills but their variances are VERY different. As in, Emmy Noether versus ~everyone else.
And if there is a great difference between groups, it would be more rational to apply strong filters (except that, for example, people who are bad at math conveniently aren’t likely to become programmers). Perhaps the downvoter(s) thought you only presented the anti-discrimination side of the issue.
I think in most cases the average is more important in deciding how much to discriminate. But I deleted the relevant phrase because I’m not sure about that specific case and my argument holds about the same amount of water without it as with it.
EDIT:
Huh, I was intending to say that it’s acceptable to discriminate on real existing differences, to the extent that those differences exist. Not sure how to fix my comment to make that less ambiguous, so just saying it straight out here.
Indeed. For some reason I’m not sure of, I instinctively dislike Chinese people, but I don’t endorse this dislike and try to act upon it as little as possible (except when seeking romantic partners—I think I do get to decide what criteria to use for that).
Can you expand on the difference you see between acting on your (non-endorsed) preferences in romantic partners, and acting on those preferences in, for example, friends?
As for this specific case, I don’t happen to have any Chinese friend at the moment, so I can’t.
More generally, see some of the comments on this Robin Hanson post: not many of them seem to agree with him.
I don’t understand how not having any Chinese friends at the moment precludes you from expanding on the differences between acting on your dislike of Chinese people when seeking romantic partners and acting on it in other areas of your life, such as maintaining friendships.
Yes, the commenters on that post mostly don’t agree with him.
That said, I would summarize most of the exchange as:
”Why are we OK with A, but we have a problem with B?”
″Because A is OK and B is wrong!”
Which isn’t quite as illuminating as I might have liked.
Since I’m not maintaining any friendships with Chinese people, I can’t see what it would even mean for me to act on my dislike of Chinese people in maintaining friendships. As for ‘other areas of my life’, this means that I attempt to interact with a Chinese-looking beggar the same way I’d interact with a European-looking beggar, to read a paper by an author with a Chinese-sounding name the same way I’d read one by an author with (say) a Polish-sounding name, and so on. (I suspect I might have misunderstood your question, though.)
Depends on what you mean by “better”. There’s a difference between taking the data on race and IQ seriously, and wanting to commit genocide.
(blink)
Can you unpack the relationship here between some available meaning of “better” and wanting to commit genocide?
That’s the question I was implicitly asking Oscar.
Most obvious plausible available meaning for ‘better’ that fits: “Most satisfies my average utilitarian values”.
(Yes, most brands of simple utilitarianism reduce to psychopathy—but since people still advocate them we can consider the meaning at least ‘available’.)
Fair enough.
Sure, I just thought it was weird that the definitions given barely even mentioned race.
You left out one common definition.
Also, I don’t see why calling Obama the “Food Stamp President” or otherwise criticizing his economic policy makes one a jerk, much less a “Racist2”, unless one already believes that all criticism of Obama is racist by definition.
I’m honestly confused. You don’t see why calling Obama a “Food Stamp President” is different from criticizing his economic policy?
I guess I would not predict that particular phrase being leveled against Hillary or Bill Clinton—even from people who disagreed with their economic policies for the same reasons they disagree with Obama’s economic policies.
Well, Bill Clinton had saner economic policies, but otherwise I would predict that phrase, or something similar, being used against a white politician.
You haven’t answered my question:
Given the way that public welfare codes for both “lazy” and “black” in the United States, do you think that “Food Stamp President” has the same implications as some other critique of Obama’s economic policies (in terms of whether the speaker intended to invoke Obama’s race and whether the speaker judges Obama differently than some other politician with substantially identical positions)?
“public welfare codes for both “lazy” and “black” in the United States”
Taking your word on that, what “other critique of Obama’s economic policies” are you imagining that would not have the same implications, unless you mean one that ignores public welfare entirely in favor of focusing on some other economic issue instead?
A political opponent of Obama might say:
or
or
edit: or
(end edit)
without me thinking that the political opponent was intending to invoke Obama’s race in some way. None of these are actual quotes, but I think they are coherent assertions that disagree with Obama’s economic or legal philosophy. Edit: I feel confident I could find actual quotes of equivalent content.
Of course, none of the ones you suggested is actually about public welfare, in the sense of the government providing supplemental income to people who are unable to get jobs that provide adequate income. So what we have is not a code word, but rather a code issue.
Except the first one, but with how you framed it as “public welfare codes for...” I don’t see how that one wouldn’t have the same connotations.
Tl;dr: You have a good point, but we seem to be stuck with the historical context.
Unemployment benefits might qualify as public welfare. More tenuously, the various health insurance subsidies and expansions of Medicaid (government health insurance for the very poor) contained in “Obamacare.”
But your point is well taken. The well has been poisoned by political talking points from the 1980s (e.g. welfare queen and the response from the left). I’ll agree that there’s no good reason for us to be trapped in the context from the past, but politicians have not tried very hard to escape that trap.
The term “welfare president” has the advantage of not having a huge inferential distance (how many people know what a Laffer curve is?) and working as a soundbite.
Here is another example of my point that one can claim any criticism of Obama is racist if one is sufficiently motivated.
Well, yes by finding enough “code words” you can make any criticism of Obama racist.
Yes, that’s certainly true.
I’m really curious now, though. What’s your opinion about the intended connotations of the phrase “food stamp President”? Do you think it’s intended primarily as a way of describing Obama’s economic policies? His commitment to preventing hunger? His fondness for individual welfare programs? Something else?
Or, if you think the intention varies depending on the user, what connotations do you think Gingrich intended to evoke with it?
Or, if you’re unwilling to speculate as to Gingrich’s motives, what connotations do you think it evokes in a typical resident of, say, Utah or North Dakota?
The direct meaning is a reference to the fact that food stamp use has soared during his presidency. More generally, a reference to his governing style, which includes anti-business policies and expanding entitlements.
I’m going to be charitable and assume that by “direct meaning” you mean to refer to the intended connotations that I asked about. Thanks for the answer.
That seems improbable. To pick the first example I Googled off of the Atlantic website: Chart of the Day: Obama’s Epic Failure on Judicial Nominees contains some substantive criticism of Obama—can you show me where it contains “code words” of this kind?
It’s not an improbable claim so much as a nigh-unfalsifiable claim.
I mean, imagine the following conversation between two hypothetical people, arbitrarily labelled RZ and EN here:
EN: By finding enough “code words” you can make any criticism of Obama racist.
RZ: What about this criticism?
EN: By declaring “epic”, “confirmation mess”, and “death blow” to be racist “code words”, you can make that criticism racist.
RZ: But “epic”, “confirmation mess”, and “death blow” aren’t racist code words!
EN: Right. Neither is “food stamps”.
Of course, one way forward from this point is to taboo “code word”—for example, to predict that an IAT would find stronger associations between “food stamps” and black people than between “epic” and black people, but would not find stronger associations between “food stamps” and white people than between “epic” and white people.
I think “nigh-unfalsifiable” is unfair in general when it comes to the use of code words, but I’m not familiar with the facts of the particular case under discussion.
I agree in the general case.
In fact, I fully expect that (for example) an IAT would find stronger associations between “food stamps” and black people than between “epic” and black people, but would not find stronger associations between “food stamps” and white people than between “epic” and white people, and if I did not find that result I would have to seriously rethink my belief that “food stamps” is a dog-whistle in the particular case under discussion; it’s not unfalsifiable at all.
But I can’t figure out any way to falsify the claim that “by finding enough ‘code words’ you can make any criticism of Obama racist,” nor even the implied related claim that it’s equally easy to do so for all texts. Especially in the context of this discussion, where the experimental test isn’t actually available. All Eugene_Nier has to do is claim that arbitrarily selected words in the article you cite are equally racially charged, and claim—perhaps even sincerely—to detect no difference between the connotations of different words.
I wouldn’t actually use IAT to find these kind of connections—I would look at the use of phrases in other contexts by other people, and I would look at the reactions to the phrases in those contexts.
To take a historical example from Battle Cry of Freedom: The Civil War Era by James M. McPherson: in the 1862 riots against the draft, one of the banners that rioters carried read, “The Constitution As It Is, The Union As It Was”. That this allusion to the Constitution is an allusion to the legality of slavery under said Constitution is supported by one of the other banners carried by the same groups of rioters: “We won’t fight to free the nigger”. If, in 1862, a candidate for state office out in the Midwest were to repeat (or even, depending on the exact words, paraphrase) that phrase about the Constitution, I think the charge of “code word” would be well-placed.
I agree that looking at deployment of phrases is a useful way of finding code words, but it is always vulnerable to “cherry-picking.” The second banner you mentioned might or might not have been representative of the movement.
Consider the hypothetical protest filled with “Defend the Constitution, Strike Down Obamacare” posters, which should not be tainted by other posters saying “Keep government out of Medicare”(1) but it is hard to describe an ex ante principle explaining how distinctions should be made.
(1) For non-Americans: Medicare is widely popular government health insurance program for the elderly.
Agreed—it’s not a mechanical judgment.
Yup, looking at venues in which a phrase gets used is another way to establish likely connections between phrases and ideologies.
Unfortunately, it seems to me that most of the information that “race” provides is screened off by various things that are only weakly correlated with race, and it also seems to me that our badly-designed hardware doesn’t update very well upon learning these things. For example, “X is a college graduate, and is black” doesn’t tell you all that much more than “X is a college graduate”; it’s probably easier to deal with this by having inaccurate priors than by updating properly.
I’m not sure that what you have in mind here is screening, at least in the causal diagrams sense. If I’m not mistaken, learning that someone is a college graduate screens off race for the purpose of predicting the causal effects of college graduation, but it doesn’t screen off race for the purpose of predicting causes of college graduation (such as intelligence) and their effects. You’re right, though, that even in the latter case learning that someone is a college graduate decreases the size of the update from learning their race. (At least given realistic assumptions. If 99% of cyan people have IQ 80 and 1% have IQ 140, and 99% of magenta people have IQ 79 and 1% have IQ 240, learning that someone is a college graduate suddenly makes it much more informative to learn their race. But that’s not the world we live in; it’s just to illustrate the statistics.)
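[The cyan/magenta numbers above, worked through with one added assumption: that “college graduate” here simply means IQ at or above 100.]

```python
# Toy numbers from the comment above; "graduate" = IQ >= 100 (assumed).
populations = {
    "cyan":    [(0.99, 80), (0.01, 140)],
    "magenta": [(0.99, 79), (0.01, 240)],
}

for name, dist in populations.items():
    mean = sum(p * iq for p, iq in dist)
    grads = [(p, iq) for p, iq in dist if iq >= 100]
    total = sum(p for p, _ in grads)
    grad_mean = sum(p * iq for p, iq in grads) / total
    print(f"{name}: overall mean {mean:.2f}, graduate mean {grad_mean:.0f}")
# Overall means differ by ~0.01 points; graduate means differ by 100.
# In this contrived world, conditioning on graduation makes group
# membership far *more* informative, not less.
```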
Which are generally much harder to observe.
Um, Affirmative Action. Also tail ends of distributions.
I was under the impression that AA applied to college admissions, and that college graduation is still entirely contingent on one’s performance. (Though I’ve heard tell that legacy students both get an AA-sized bump to admissions and tend to be graded on a much less harsh scale.)
Additionally, it seems that there’s a lot of ‘different justification, same conclusion’ with regards to claims about black people. For instance, “black people are inherently stupid and lazy” becomes “black people don’t have to meet the same standards for education”. The actual example I saw was that people subconsciously don’t like to hire black people (the Chicago resume study) and justify it by saying they present a risk of an EEOC lawsuit. (The annual risk of being involved in an EEOC lawsuit is on the order of one in a million.)
A quick google search isn’t giving me an actual percentage, but I believe that students who’re admitted to and attend college, but do not graduate, are still significantly in the minority. Even those who barely made it in mostly graduate, if not necessarily with good GPAs.
One of the criticisms of colleges engaging in “AA”-type policies is that they will often put someone in a slightly higher-level school (say Berkeley rather than Davis) than they really should be in, one which, because of their background, they are unprepared for. Not necessarily intellectually (they could be very bright), but in terms of things like study skills and the like.
There is sufficient data to suggest this should be looked at more thoroughly. In general it is better for someone to graduate from a “lesser” school than to drop out of a better one.
Which policies were those again? Teetotalism, something to do with faith in a greater power, apologising to folks and, let’s see… 1,2,3… at least 9 others.
(ie. I put it that “AA” doesn’t work as a credible acronym. There are at least two far more obvious meanings for “AA policies” that must be ruled out before something to do with smart children gets considered as a hypothesis.)
I apologize. I was being lazy and assumed that since it was used multiple times above that folks following the conversation would get it from context. I didn’t realize that this conversation would so disquiet some people that they would get hung up on that, rather than addressing what many people think is a moderately serious problem, if not for society, then for the students who are basically being set up to fail.
But by all means let’s first have this silly little pissing match about not being able to track abbreviations through a conversation. It’s far more important.
No slight intended and I hope you’ll pardon my tangential reply. I know you weren’t the first to introduce the acronym.
Okay, but if not everyone graduates from college, and the point of admissions is to select people who’ll succeed in school rather than waste everyone’s time, then how does a college degree mean anything different for a standard graduate, a legacy graduate, and an affirmative-action graduate? (Note that the bar is lowered for legacy graduates to the same degree as for affirmative-action graduates, so if you don’t hear “my father also went here” the same way as “I got in partly because of my race”, then there’s a different factor at work here.)
In the extreme case where being above a given level of competence deterministically causes graduation, you’re correct and AA makes no difference; the likelihood (but not necessarily the prior or posterior probability) of different competence levels for a college graduate is independent of race. In the extreme case where graduation is completely random, you’re wrong and AA affects the evidence provided by graduation in the same way as it affects the evidence provided by admission. Reality is likely to be somewhere in between (I’m not saying it’s in the middle).
It depends on the actual distribution of legacy and AA graduates.
I’d say that the point of admissions is less to sort people who’ll succeed from people who’ll waste the school’s time than to weed out people who’ll reflect poorly on the status of the school. Colleges raise their status by taking better students, so their interests are served not by taking students down to the lower limit of those who can meet academic requirements, but by being as selective as they can afford to be. Schools will even lie about the test scores of students they actually accept, among other things, to be seen as more selective.
I think it’s more a case same observations, different proposed mechanisms.
Has anyone ever claimed that any criticism of Obama is racist by definition? I only ever see this claim from people who want to raise the bar for racism above what they’ve been accused of. It’s not like targeting welfare to play on racism is a completely outlandish claim—I hope you’re familiar with Lee Atwater’s very famous description of the Southern Strategy:
No, they just declare each individual instance ‘racist’ no matter how tenuous the argument. The rather ludicrous attempts to dismiss the Tea Party as ‘racist’ being the most prominent example.
That’s the R2 way of phrasing R{1,2}, like “race traitor” is the R3 way of phrasing R1 or celandine’s phrasings are from an R1 perspective. (Not saying you are a jerk; just trying to separate out precisely such connotative differences from these useful clusters/concentric rings in peoplespace.)
(N.B. that if this definition wasn’t question-begging and/or indexical it would imply that iff accurate priors are equal over races then the genuinely colorblind are racists.)
Possibly; I couldn’t quite figure out MixedNuts’ definitions because he seemed to be implicitly assuming that accurate priors were equal across races.
Well they aren’t. Nevertheless, I should probably have said something more like:
Apart from race, isn’t this a problem with English, or language in general? We use the same words for varying degrees of a notion, and people cherry-pick the definitions they want to respond to. If I call someone a conservative, is it a compliment or an insult? That depends on both of our perceptions of the word “conservative” as well as our outlook on ourselves as political beings; beyond that, I could mean to say that the person is fiscally conservative, but as the current conservative candidates are showing conservatism to be far-right extremism, the person may think, “Hey! I’m not one of those guys.”
I think if someone wants to argue with you, you’d be hard-pressed to speak eloquently enough to provide an impenetrable phrase that does not open itself to a spectrum of interpretation.
Sure. “Conservative” isn’t a fixed political position. Quite often, it’s a claim about one’s political position: that it stands for some historical good or tradition. A “conservative” in Russia might look back to the good old days of Stalin whereas a “conservative” in the U.S. would not appreciate the comparison. It’s also a flag color; your “fiscal conservative” may merely not want to wave a flag of the same color as Rick Santorum’s.
What about a “Racist4”: someone who assigns different moral values to people of different races, all other things being equal?
Based on a couple interviews I’ve seen with unabashed Racist3s, I think that they would tend to fulfill that criterion.
Edit: Requesting clarification for downvote?
That would be a paleo-nazi. Not many of them around, anymore, and those that are don’t get away with much.
Why make up a new word? Paleoconservatives and smarter white nationalists (think Jared Taylor ) seem to often fit the bill.
It depends: if the differences in assigned moral values are large enough, they can approach Nazi territory pretty quickly. As a thought experiment, consider how many dolphins you would kill to save a single person.
Marvin Minsky
--Nietzsche
“The mind commands the body and it obeys. The mind orders itself and meets resistance. ”
-St Augustine of Hippo
Augustine has obviously never tried to learn something which requires complicated movement, or at least he didn’t try it as an adult.
The general principle is: cached is fast, cache-populating is slow. This goes for mind and “body” both, because the body does as it’s told, but it needs telling in a lot of detail, and the control signals need to be discovered. Most people, for both mind and body, learn enough control signals for day-to-day use, and stop.
I do somewhat wonder what it would be like to know the control signals for all my muscles, Bene Gesserit style.
Vladimir Vasiliev is a Bene Gesserit, at least for skeletal muscle. Unfortunately, I can’t locate any of the videos that really demonstrate this on youtube; but it makes him able to do some strange-looking things very effectively.
I’m reasonably sure that the important thing is awareness of muscles in systems appropriate for movement [1] rather than as individuals. Herbert had a good intuition there, but Feldenkrais is a real-world method of improving movement. Also take a look at Eric Franklin’s books on practical anatomy.
[1] That’s approximate phrasing for an approximate idea.
It may be a matter of the mind having to first order itself to give the body the correct commands.
That seems fair, but on the other hand, it seems that a primary way of the mind acquiring the order it needs is to start by giving the body commands that the body doesn’t follow.
-
-George Orwell
Sadly, there’s no need of any adjective before “Politics” here. It’s a fully general statement.
You may be able to delete the words on either side of the adjective as well.
G. K. Chesterton
Zach Wiener’s elegant disproof:
(Although to be fair, it’s possible that the disproof fails because “think of the strangest thing that’s true” is impossible for a human brain.)
It also fails in the case where the strangest thing that’s true is an infinite number of monkeys dressed as Hitler. Then adding one doesn’t change it.
More to the point, the comparison is more about typical fiction, rather than ad hoc fictional scenarios. There are very few fictional works with monkeys dressed as Hitler.
Indeed, I posted this quote partially out of annoyance at a certain type of analysis I kept seeing in the MoR threads. Namely, person X benefited from the way event Y turned out; therefore, person X was behind event Y. After all, thinking like this about real life will quickly turn one into a tin-foil-hat-wearing conspiracy theorist.
Yes, but in real life the major players don’t have the ability to time travel, read minds, become invisible, manipulate probability, etcetera; these abilities make complex plans far more plausible than they would be in the real world. (That, and conservation of detail.)
In real life the major players are immune to mindreading, can communicate securely and instantaneously worldwide, and have tens of thousands of people working under them. You are, ironically, overlooking the strangeness of reality.
Conservation of detail may be a valid argument though.
Conservation of detail is one of the memetic hazards of reading too much fiction.
Which is exactly what MoR tells us to do to analyze it, is it not?
That’s still not a reason for assuming everyone is running perfect gambit roulettes.
You can say that with a straight face after the last few chapters of plotting?
Yes, I was referring to the theories that Dumbledore sabotaged Snape’s relationship with Lily so that the Boy-Who-Lived (who hadn’t even been born then) would have the experience of being bullied by his potions master.
Depends on the infinity. Ordinal infinities change when you add one to them.
If we’re restricting ourselves to actual published fiction, I present Cory Doctorow’s Someone Comes to Town, Someone Leaves Town. The protagonist’s parents are a mountain and a washing machine, it gets weirder from there, and the whole thing is played completely straight.
Depends on which end you add one at. :-)
(I mention this not because I think there’s any danger Ezekiel doesn’t know it, but just because it might pique someone’s curiosity.)
[comment deleted]
This quote seems relevant:
G. H. Hardy, upon receiving a letter containing mathematical formulae from Ramanujan
Doesn’t work if (n + 1) monkeys dressed as Hitler are no stranger than n monkeys dressed as Hitler, and n monkeys dressed as Hitler are true.
Eliezer’s unconventional definition of “strange” is occasionally annoying.
Strange I would almost accept. But in this case the quote is ‘unusual’… that’s even worse! Unusual fits squarely into the realm of ‘actually happens’.
Also:
I was originally going to post that one, but decided to go with Chesterton’s version since it better explains what is meant. (At the expense of losing some of the snappiness.)
“Reality is the thing that surprises me.”—Paraphrase of EY
Paul Halmos
-Tim Ferriss, The 4-Hour Workweek
-- Peter Drucker
(I’ve quoted this line several times before.)
Sure there is. Doing inefficiently what should not be done at all is even more useless. At least if you do it efficiently you can go ahead and do something else sooner.
It seems to me that efficiency is just as useful when doing things that should not be done as it is at other times, for a fixed amount of stuff that shouldn’t be done.
Depends on the kind of efficiency, I guess.
If someone is systematically murdering people for an hour, I’d prefer they not get as much murdering done as they could.
I did specify “for a fixed amount of doing stuff that shouldn’t be done”. If they are getting more murdering done, that is probably bad.
-Robert Kurzban, Why Everyone (Else) is a Hypocrite: Evolution and the Modular Mind
Natalie Reed, Getting Skeptics to Think Rationally About Their Skepticism
Upvoted because I like Natalie Reed, but this is way too long. The key sentence seems to be
Thanks. I didn’t wanna post this much, but I was rather too attached to the passage to cut anything else out. Helps to have other eyes.
— Jack Vance, The Languages of Pao
Shorter version:
-- Terence, Phormio
My favorite:
Daniel Kahneman, Thinking, Fast and Slow
If you know the scores of two different golfers on day 1, then you know more than if you know the score of only one golfer on day 1. You can’t predict the direction in which regression to the mean will occur if your data set is a single point.
The following all have different answers:
(The answer is 39700; I’m probably not going to improve with practice, and you have no way to know if 39700 is unusually good or unusually bad.)
(The answer is some number less than 39700; knowing that my friend got a lower score gives you a reason to believe that 39700 might be higher than normal.)
(The answer is some number higher than 39700, because I’m no longer an absolute beginner.)
True, a single data point can’t give you knowledge of regression effects. In the context of the original problem, Kahneman assumed that you had access to the average score of all the golfers on the first day.
I’m not sure it’s true that the answer is higher than 39700 in this case. It depends on whether you have knowledge of how people generally improve, and on whether your score is higher than average for an absolute beginner. Since unknown factors could adjust the score either up or down, I would probably just guess that it will be the same the next day.
The existence of factors which could adjust the score either up or down does not indicate which factors dominate. In this case, you have no information which suggests that 39700 is either above or below the median, and therefore these two cases must be assigned equal probability—canceling out any “regression to the mean” effects you could have predicted. Similar arguments apply to other effects which change the score.
So you estimate “regression to the mean” effects as zero, and base your estimate on any other effects you know about and how strong you think they are. That makes sense. Thanks for the correction!
Not quite: you have some background information about the range of scores video games usually employ.
And, I suppose, information about the probability of people mentioning average scores. I concede that either factor could justify arguing that the score should decrease.
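[For what it’s worth, a toy simulation of the golf example makes the regression effect visible once you can condition on more than a single point. The model (observed score = stable skill + daily luck) and all its parameters are invented for illustration.]

```python
# Toy model: score = stable skill + daily luck. Golfers who scored very
# well on day 1 are, on average, both skilled and lucky, so their day-2
# scores regress toward the population mean.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
skill = rng.normal(72, 3, n)        # each golfer's true average (assumed)
day1 = skill + rng.normal(0, 3, n)  # skill plus day-1 luck
day2 = skill + rng.normal(0, 3, n)  # same skill, fresh luck

good_day1 = day1 < 66               # very good (low) day-1 scores
print(day1[good_day1].mean())       # about 64
print(day2[good_day1].mean())       # about 68: halfway back toward 72,
# since skill and luck have equal variance in this toy setup.
```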
It reminds me of E.T. Jaynes’ explanation of why time-reversible dynamic laws for (say) sugar molecules in water lead to a time-irreversible diffusion equation.
-Game of Thrones (TV show)
-- Isuna Hasekura, Spice and Wolf vol. 5 (“servant” is justified by the medieval setting).
I don’t get it.
Short explanation: the person that knows why a thing must be done is generally the person who decides what must be done. Application to rationality: instrumental rationality is a method that serves goals. The part that values and the part that implements are distinct. (Also, you can see the separation of terminal and instrumental values.)
And explains why businessmen keep more of the money than the random techies they hire.
Would “servant” not otherwise be justified?
It’s fairly benign, but looks a little archaic—not so archaic that it’d have to be medieval, though. The rest of the phrasing is fairly modern, or I’d probably have assumed it was a quote from anywhere from the Enlightenment up to the Edwardian period. It has the ring of something a Victorian aphorist might say.
I think the quote should start with, “he WHO knows...”.
--Jonathan Haidt, source
He also talks about how sacredness is one of the fundamental values for human communities, and how liberal/left-leaning theorists don’t pay enough attention to it (and refuse to acknowledge their own sacred/profane areas).
I have more to say about his values theory, I’ll post some thoughts later.
UPD: I wrote a little something, now I’m just gonna ask Konkvistador whether he thinks it’s neutral enough or too political for LW.
Please make sure you do. I suspect it will be interesting. :)
I first encountered this in a physics newsgroup, after some crank was taking some toy model way too seriously:
Thaddeus Stout Tom Davidson
(I remembered something like “if you pull them too much, they break down”, actually...)
My old physics professor David Newton (yes, apparently that’s the name he was born with) on how to study physics.
--Some AI Koans, collected by ESR
My physics teacher is always sure to clarify which parts of a problem are physics and which are math. Physics is usually the part that allows you to set up the math.
-- Mark Rippetoe, Starting Strength
Sample: men who come to this guy to get stronger, I assume?
Hmm. This sort of thing seems plausible, but I wonder how much of it is strength-specific? I’ve heard of eudaimonic effects for exercise in general (not necessarily strength training) and for mastering any new skill, and I doubt he’s filtering those out properly.
Why was this downvoted?
He’s ignoring that people might not like how larger muscles look.
And personally (though I don’t care much) I would only care about practical athletic ability, not weight lifting.
I understand this line of thought, but… strength doesn’t have to be developed through weights, strength increase doesn’t necessarily mean much hypertrophy, and most importantly strength is a prerequisite/accelerator for increasing pretty much all athletic abilities (power, flexibility, endurance…)
I guess the relation between muscle mass and physical attractiveness is non-monotonic, so a marginal increase in muscle mass would make some people look marginally better and other people look marginally worse. (I suspect the median Internet user is in the former group, though.)
ETA: Judging from the picture on Wikipedia, Rippetoe himself looks like someone who would look better if he lost some weight (but I’m a heterosexual male, so my judgement might be inaccurate).
I’m somewhat annoyed that the comments on this thread are vapid, but this might be worth responding to. It doesn’t particularly matter whether or not Rippetoe is himself currently ripped—see this Wikipedia article of yours for his domain expert credentials:
Secondly, notice that he was a competitive powerlifter thirty years ago. Senescence is a bitch.
Why “of yours”? I’ve never edited it.
I didn’t dispute them. The grandparent and great-grandparent are about “how larger muscles look”. I can’t see how the passage you quote is relevant to the fact that I think he’s ugly.
Yoshinori Kitase
Context: Aeris dies. (Spoilers!)
It would be interesting to calculate the total utility of an author wantonly murdering a universally beloved character. May turn out to be quite a crime...
Well, it’s certainly not limited to killing off characters, but people have been writing about emotional release as a response to tragedy in drama for quite a long time. Generally it’s thought of as a good thing, if not necessarily a pleasant one, and I’m inclined to agree with this analysis; people go into fiction looking for an emotional response, and the enduring popularity of tragic storytelling suggests that they aren’t exclusively looking for emotions generally regarded as positive.
Content warnings pointing to what a work’s going for might not be a bad idea from a utilitarian standpoint, though. I personally handle tragedy well, for example, but I have a lot of trouble with cringe comedy.
I’ve had to leave the room because I get embarrassed just watching characters in that kind of show...
Well, one of my favorite authors is infamous for doing this, and I for one think his works are the better for it. It certainly hasn’t prevented them from becoming very popular.
Upvoted, for having the exact same thought as I did when reading the parent post.
Maybe you were both primed by gRR’s username.
-- Christina Rossetti, Who has seen the Wind?
Interestingly enough, this is my friend’s parents’ response when asked why they believe in an invisible god. I suppose they haven’t considered that the leaves and trees may be messed up enough to shake of their own accord.
Interesting.
It is rather unlikely that Christina Rossetti intended this to be a rationalist quote in a sense we would identify with. I do read it as an argument for scientific realism and belief in the implied invisible, but it seems likely that she was merely being poetic or that she was making a pro-religion argument, given her background. Of course the beauty of this system is that if someone quotes this to you as an argument for God (or anything), you can ask them what the leaves and trees are for their wind and thus get at their true argument.
Furthermore, the context in which I first read it is the video game Braid, juvpu cerfragrq vg va gur pbagrkg bs gur chefhvg bs fpvrapr. I would highly recommend this game, by the way.
Hey! It’s Super Mario with built in cheat modes!
Could you rot13 the word fpvrapr in the last paragraph? For me, finally getting the meaning of the princess at the end was such a beautiful realization that I wouldn’t like to spoil it for others…
(I highly recommend the game too. In fact, I’ve already bought it several times – once for me, and as a gift for others.)
Done and agreed. I am ashamed to admit that I first played it from a pirated copy—I later bought it, and I intend to buy Jonathan Blow’s next game The Witness when it comes out. But I still feel bad about pirating it...
I love that game, it’s been a while since I played it though.
I third the recommendation.
A shortcut for making less-biased predictions, taking base averages into account.
Regarding this problem: “Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?”
Daniel Kahneman, Thinking, Fast and Slow
The Last Psychiatrist (screen name, otherwise anonymous) in a response to a critique of a book, regarding how we define psychiatric terms.
--Razib Khan, source
-- Fahrenheit 451
I’ll be sticking around a while, although I’m not doing too well right now (check the HPMOR discussion thread for those of you interested in viewing the carnage, it’s beautiful). It’s not really a rationality problem, but I need to learn how to deal with other people who have big egos, because apparently only two or three people received my comments the way I meant them to come across. Plus, I like the idea of losing so much karma in one day and then eventually earning it all back and being recognized as a super rationalist. Gaining the legitimate approval of a group who now have a lot against me will be a decent challenge.
Also I doubt that I would be able to resist commenting even if I wanted to. That’s probably mostly it.
Tips for dealing with people with big egos:
Don’t insult anyone, ever. If Wagner posts, either say “Hmm, why do you believe Mendelssohn’s music to be derivative?” or silently downvote, but don’t call him an antisemitic piece of shit.
Attributing negative motivations (disliking you, wanting to win a debate, being prejudiced) counts as an insult.
Attributing any kind of motivation at all is pretty likely to count as an insult. You can ask about motivation, but only list positive or neutral ones or make it an open question.
Likewise, you can ask why you were downvoted. This very often gets people to upvote you again if they were wrong to downvote you (and if not, you get the information you want). Any further implication that they were wrong is an insult.
Stick closely to the question and do not involve the personalities of debaters.
Exception to the above: it’s okay to pass judgement on a personality trait if it’s a compliment. If you can’t always avoid insulting people, occasionally complimenting them can help.
A lot of things are insults. You will slip up. This won’t make people dislike you.
If you know what a polite and friendly tone is, have one.
If someone isn’t polite and friendly, it means you need to be more polite and friendly.
If they’re being very rude and mean and it’s getting annoying, you can gently mention it. Still make the rest of your post polite and friendly and about the question.
If the “polite and about the question” part is empty, don’t post.
If you have insulted someone in a thread—either more than once, or once and people are still hostile despite you being extra nice afterwards—people will keep being hostile in the thread and you should probably walk away from it.
If hostility in a thread is leaking into your mood, walk away from the whole site for a little while.
When you post in another thread, people will not hold any grudges against you from previous threads. Sorry for your epic quest, but we don’t have much against you right now.
Apologies (rather than silence) are a good idea if you were clearly in the wrong and not overly tempted to add “but”.
On politeness:
Some politeness norms are stupid and harmful and wrong, like “You must not criticize even if explicitly asked to” or “Disagreement is impolite”. Fortunately, we don’t have these here.
Some are good, like not insulting people. Insulting messages get across poorly. This happens even when people ignore the insult to answer the substance, because the message is overloaded.
Some are mostly local communication protocols that help but can be costly to constrain your message around. It’s okay to drop them if you can’t bear the cost.
Some are about fostering personal liking between people. They’re worthwhile to people who want that and noise to people who don’t.
Taking pains to be polite is training wheels. People who are good with words can say precisely and concisely what they mean in a completely neutral tone. People who aren’t are injecting lots of accidental interpersonal content, so we need to make it harmless explicitly.
People who are exempted:
The aforementioned people, who will never accidentally insult anyone;
People whose contribution is so incredibly awesome that it compensates for being insufferable; I know of a few but none on LessWrong;
wedrifid, who is somehow capable of pleasant interaction while being a complete jerk.
I’ll add to this that actually paying attention to wedrifid is instructive here.
My own interpretation of wedrifid’s behavior is that mostly s/he ignores all of these ad-hoc rules in favor of:
1) paying attention to the status implications of what’s going on,
2) correctly recognizing that attempts to lower someone’s status are attacks
3) honoring the obligations of implicit social alliances when an ally is attacked
I endorse this and have been trying to get better about #3 myself.
Might be too advanced for someone who just learned that saying “Please stop being stupid.” is a bad idea.
Sure. Then again, if you’d only intended that for chaosmosis’ benefit, I assume you’d have PMed it.
Well… I’ve seen people use nearly that exact phrase to great effect at times… But that’s not the sort of thing you’d want to include in a ‘basics’ list either.
Just as with fashion, it is best to follow the rules until you understand the rules well enough to know exactly how they work and why a particular exception applies!
The phrase “social alliances” makes me uneasy with the fear that if everyone did #3, LW would degenerate into typical green vs blue debates. Can you explain a bit more why you endorse it?
If Sam and I are engaged in some activity A, and Pat comes along and punishes Sam for A or otherwise interferes with Sam’s ability to engage in A...
...if on reflection I endorse A, then I endorse interfering with Pat and aiding Sam, for several reasons: it results in more A, it keeps me from feeling like a coward and a hypocrite, and I establish myself as a reliable ally. I consider that one of the obligations of social alliance.
...if on reflection I reject A, then I endorse discussing the matter with Sam in private. Ideally we come to agreement on the matter, and either it changes to case 1, or I step up alongside Sam and we take the resulting social status hit of acknowledging our error together. This, too, I consider one of the obligations of social alliance.
...if on reflection I reject A and I can’t come to agreement with Sam, I endorse acknowledging that I’ve unilaterally dissolved the aspect of our social alliance that was mediated by A. (Also, I take that status hit all by myself, but that’s beside the point here.)
I agree with you that if I instead skip the reflective step and reflexively endorse A, that quickly degenerates into pure tribal warfare. But the failure in this case is not in respecting the alliance, it’s failing to reflect on whether I endorse A. If I do neither, then the community doesn’t degenerate into tribal warfare, it degenerates into chaos.
Admittedly, chaos can be more fun, but I don’t really endorse it.
All of that said, I do recognize that explicitly talking about “social alliances” (and, indeed, explicitly talking about social status at all) is a somewhat distracting thing to do, and doesn’t help me make myself understood especially well to most audiences. It was kind of a self-indulgent comment, in retrospect, although an accurate one (IMO).
(I feel vaguely like Will_Newsome, now. I wonder if that’s a good thing.)
Start to worry if you begin to feel morally obliged to engage in activity ‘Z’ that neither you, Sam or Pat endorse but which you must support due to acausal social allegiance with Bink mediated by the demon X(A/N)th, who is responsible for UFOs, for the illusion of stars that we see in the sky and also divinely inspired the Bhagavad-Gita.
Been there, done that. (Not specifically. It would be creepy if you’d gotten the specifics right.)
I blame the stroke, though.
Battling your way to sanity against corrupted hardware has the potential makings of a fascinating story.
It wasn’t quite as dramatic as you make it sound, but it was certainly fascinating to live through.
The general case is here.
The specifics… hm.
I remain uncomfortable discussing the specifics in public.
Is establishing yourself as a reliable ally an instrumental or terminal goal for you? If the former, what advantages does it bring in a group blog / discussion forum like this one? The kinds of alliances you’ve mentioned so far are temporary ones formed implicitly by engaging someone in discussion, but people will discuss things with you if they think your comments are interesting, with virtually no consideration for how reliable you are as an ally. Are you hoping to establish other kinds of alliances here?
Instrumental.
Trust, mostly. Which is itself an instrumental goal, of course, but the set of advantages that being trusted provides in a discussion is so ramified I don’t know how I could begin to itemize it.
To pick one that came up recently, though, here’s a discussion of one of the advantages of trust in a forum like this one, related to trolley problems and similar hypotheticals.
Another one that comes up far more often is other people’s willingness to assume, when I say things that have both a sensible and a nonsensical interpretation, that I mean the former.
Yes, I agree that when people form implicit alliances by (for example) engaging someone in discussion, they typically give virtually no explicit consideration for how reliable I am as an ally.
If you mean to say further that it doesn’t affect them at all, I mostly disagree, but I suspect that at this point it might be useful to Taboo “ally.”
People’s estimation of how reliable I am as a person to engage in discussion with, for example, certainly does influence their willingness to engage me in discussion. And vice-versa. There are plenty of people I mostly don’t engage in discussion, because I no longer trust that they will engage reliably.
Not that I can think of, but honestly this question bewilders me, so it’s possible that you’re asking about something I’m not even considering. What kind of alliances do you have in mind?
It’s not clear to me that these attributes are strongly (or even positively) correlated with willingness to “stick up” for a conversation partner, since typically this behavioral tendency has more to do with whether a person is socially aggressive or timid. So by doing that, you’re mostly signaling that you’re not timid, with “being a good discussion partner” a much weaker inference, if people think in that direction at all. (This is the impression I have of wedrifid, for example.)
I didn’t have any specific kind of alliances in mind, but just thought the question might be worth asking. Now that I think about it, it might be for example that you’re looking to make real-life friends, or contacts for advancing your career, or hoping to be recruited by SIAI.
This model of the world does an injustice to a class of people I hold in high esteem (those who are willing to defend others against certain types of social aggression even at cost to themselves) and doesn’t seem to be a very accurate description of reality. A lot of information—and information I consider important at that—can be gained about a person simply by seeing who they choose to defend in which circumstances. Sure, excessive ‘timidity’ can serve to suppress this kind of behavior and so information can be gleaned about social confidence and assertiveness by seeing how freely they intervene. But to take this to the extreme of saying you are mostly signalling that you’re not timid seems to be a mistake.
In my own experience—from back when I was timid in the extreme—the sort of “sticking up for”, jumping to the defense against (unfair or undesirable) aggression is one thing that could break me out of my shell. To say that my defiance of my nature at that time was really just me being not timid after all would be to make a lie of the battle of rather significant opposing forces within the mind of that former self.
Merely that I am bold and that my behavioral tendencies and strategies in this kind of area are just signals of that boldness? Dave’s model seems far more accurate and useful in this case.
I find that my brain doesn’t automatically build detailed models of LW participants, even the most prominent ones like yourself, and I haven’t found a strong reason to do so consciously, using explicit reasoning, except when I engage in discussion with someone, and even then I only try to model the part of their mind most relevant to the discussion at hand.
I realize that I may be engaging in typical mind fallacy in thinking that most other people are probably like me in this regard. If I am, I’d be curious to find out.
Fair enough; it may be that I overestimate the value of what I’m calling trust here.
Just for my own clarity, when you say that what I’m doing is signaling my lack of timidity, are you referring to my actual behavior on this site, or are you referring to the behavior we’ve been discussing on this thread (or are they equivalent)?
I’m not especially looking to make real-life friends, though there are folks here who I wouldn’t mind getting to know in real life. Ditto work contacts. I have no interest in working for SI.
I was talking about the abstract behavior that we were discussing.
I really like your illustration here. To the extent that this is what you were trying to convey by “3)” in your analysis of wedrifid’s style then I endorse it. I wouldn’t have used the “alliances” description since that could be interpreted in a far more specific and less desirable way (like how Wei is framing it). But now that you have unpacked your thinking here I’m happy with it as a simple model.
Note that depending on the context there are times where I would approve of various combinations of support or opposition to each of “Sam”, “Pat” and “A”. In particular, there are many behaviors “A” whose execution will immediately place the victim of said behavior into the role of “ally that I am obliged to support”.
Yeah, agreed about the distracting phrasing. I find it’s a useful way for me to think about it, as it brings into sharp relief the associated obligations for mutual support, which I otherwise tend to obfuscate, but talking about it that way tends to evoke social resistance.
Agreed that there are many other scenarios in addition to the three I cite, and the specifics vary; transient alliances in a multi-agent system can get complicated.
Also, if you have an articulable model of how you make those judgments I’d be interested, especially if it uses more socially acceptable language than mine does.
Edit: Also, I’m really curious as to the reasoning of whoever downvoted that. I commit to preserving that person’s anonymity if they PM me about their reasoning.
For what it is worth, sampling over time suggests multiple people—at one point there were multiple upvotes.
I’m somewhat less curious. I just assumed it was people from the ‘green’ social alliance acting to oppose the suggestion that people acting out the obligations of social allegiance is a desirable and necessary mechanism by which a community preserves that which is desired and prevents chaos.
Regardless of whether or not this is compatible with being a “complete jerk” in your sense, I wish to point out that wedrifid is in many respects an exemplary Less Wrong commenter. There are few others I can think of who are simultaneously as (1) informative, including about their own brain state, (2) rational, especially in the sense of being willing and able to disagree within factions/alliances and agree across them, and (3) socially clueful, in the sense of being aware of the unspoken interpersonal implications of all discourse and putting in the necessary work to manage these implications in a way compatible with one’s other goals (naturally the methods used are community-specific but that is more than good enough).
In saying this, I don’t know whether I’m expanding on your point or disagreeing with it.
I would be interested in having wedrifid write a post systematically explaining his philosophy of how to participate on LW, because the bits and pieces of it that I’ve seen so far (your comment, TheOtherDave’s, this comment by wedrifid) are not really forming into a coherent whole for me.
That would be an interesting thing to do, too. It is on the list of posts that I may or may not get around to writing!
I appreciate your kind words komponisto! You inspire me to live up to them.
This discussion is off-topic for the “Rationality Quotes” thread, but...
If you’re interested in an easy way to gain karma, you might want to try an experimental method I’ve been kicking around:
Take an article from Wikipedia on a bias that we don’t have an article about yet. Wikipedia has a list of cognitive biases. Write a top-level post about that bias, with appropriate use of references. Write it in a similar style to Eliezer’s more straightforward posts on a bias, examples first.
My prediction is that such an article, if well-written, should gain about +40 votes; about +80 if it contains useful actionable material.
No, I want this to be harder than that. It needs to be a drawn out and painful and embarrassing process.
Maybe I’ll eventually write something like that. Not yet.
Oh, you want a Quest, not a goal. :-)
In that case, try writing an article that says exactly the opposite of something that somebody with very high (>10,000) karma says, even linking to their statement to make the contrast clear. Bonus points if you end up getting into a civil conversation directly with that person in the comments of your article.
Note: I believe that it is not only possible, but even easy, for you to do this and get a net karma gain. All you need is (a) a fairly good argument, and (b) a friendly tone.
I nominate this as the Less Wrong Summer Challenge, for everybody.
(One modification I’d make: it shouldn’t necessarily be the exact opposite: precisely reversed intelligence usually is stupidity. But your thesis should be mutually incompatible with any charitable interpretation of the original claim.)
And now I realize I just did exactly that, and your prediction is absolutely correct. No bonus points for me, though.
You just need a reasonably friendly tone. I have a bunch of karma, and I haven’t posted any articles yet (though I’m working on it).
Indeed, that would work if karma were merely the goal. But chaosmosis expressed a desire for a “painful and embarrassing process”, meaning that the ante and risk must be higher.
That actually sounds fun now that you put it like that!
One day I will write “How to karmawhore with LessWrong comments” if I can work out how to do it in such a way that it won’t get −5000 within an hour.
I know how you could do it. You need to come up with a detailed written strategy for maximizing karma with minimal actual contribution. Have some third party (or several) that LW would trust hold on to it in secret.
Then, for a week or two, apply that strategy as directly and blatantly as you think you can get away with, racking up as many points as possible.
Once that’s done, compile a list of those comments and post it into an article, along with your original strategy document and the verification from the third party that you wrote the strategy before you wrote the comments, rather than ad-hocing a “strategy” onto a run of comments that happened to succeed.
Voila: you have now pulled a karma hack and then afterwards gone white-hat with the exploit data. LW will have no choice but to give you more karma for kindly revealing the vulnerability in their system! Excellent. >:-)
Nitpick: cryptography solves this much more neatly.
Of course, people could accuse you of having an efficient way of factorising numbers, but if you do, karma is going to be the least of anyone’s concerns.
Factorization doesn’t enter into it—to precommit to a message that you will later reveal publicly, publish a hash of the (salted) message.
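For concreteness, a minimal commit-and-reveal sketch in Python (the message contents here are hypothetical):

    import hashlib
    import os

    # Commit: publish only the hash. The random salt stops anyone from
    # brute-forcing short or guessable messages against the published hash.
    message = b"my karma strategy: ..."  # hypothetical strategy document
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + message).hexdigest()
    print("publish this now:", commitment)

    # Reveal: later, publish the salt and message together; anyone can
    # recompute the hash and check it against the earlier commitment.
    assert hashlib.sha256(salt + message).hexdigest() == commitment

No factoring is involved anywhere; the scheme rests on the hash function being preimage-resistant rather than on any number-theoretic problem.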
But somewhat less transparently. The cryptographic solution still requires that an encrypted message is made public prior to the actions being taken, and declaring an encrypted prediction has side effects. The neat solution is to still use trusted parties, but give the trusted parties only the encrypted strategy (or a hash thereof).
What kind of side effects? I have no formal training in cryptography, so please forgive me if this is a naive question.
I mean you still have to give the encrypted data to someone. They can’t tell what it is but they can see you are up to something. So you still have to use some additional sort of trust mechanism if you don’t want the act of giving encrypted fore-notice to influence behavior.
Ah ok, that makes sense. In this case, you can employ steganography. For example, you could publish an unrelated article using a pretty image as a header. When the time comes, you reveal the algorithm and password required in order to extract your secret message from the image.
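A minimal sketch of the least-significant-bit idea, operating on a raw byte buffer standing in for decoded pixel data (a real version would decode and re-encode an actual image file, and ought to encrypt the message first so the low bits look like noise):

    # Hide each bit of the secret in the lowest bit of successive "pixel"
    # bytes; the change to the cover data is visually negligible.
    def embed(pixels: bytearray, secret: bytes) -> bytearray:
        bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
        return out

    def extract(pixels: bytes, length: int) -> bytes:
        bits = [pixels[i] & 1 for i in range(length * 8)]
        return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                     for b in range(length))

    cover = bytearray(range(256)) * 4  # stand-in for image pixel data
    stego = embed(cover, b"prediction")
    assert extract(stego, len(b"prediction")) == b"prediction"

Without knowing where and how the bits were embedded, the altered low bits are indistinguishable from sensor noise.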
Better yet… embed five different predictions in that header. When the time comes, reveal just the one that turned out most correct!
Hmm yes, there might be a hidden weakness in my master plan as far as accountability is concerned :-)
None that were not extant in the original scheme, assuming there are at least five people on LW who’d be considered trusted parties.
But of four people on LW who would be considered trusted parties, what’s the probability that all four would be quiet after the fifth is called upon to post the prediction or prediction hash?
You’re right, of course. I didn’t think that through. There haven’t been any good “gain the habit of really thinking things through” exercises for a Skill-of-the-Week post, have there?
Bear in mind that it’s often not worth the effort. I think the skill to train would be recognizing when it might be.
Besides, in the prediction-hash case, they may well not post right away.
“Recognizing when you’ve actually thought thoroughly” is the specific failure mode I’m thinking of; but that’s probably highly correlated with recognizing when to start thinking thoroughly.
I feel like such a skill may be difficult to consciously train without a tutor:
-- @afoolswisdom
Yes, the first thing I thought of was Quirrel’s hashed prediction; but it doesn’t seem that everyone’s forgotten yet, as of last month.
My actual strategy was just to post lots. Going through the sequences provided a target-rich environment ;-)
IME, per-comment EV is way higher in the HP:MoR discussion threads.
It so is. Karmawhoring in those is easy.
This suggests measuring posts for comment EV.
Now that is an interesting concept. I like where this subthread is going.
Interesting comparisons to other systems involving currency come to mind.
EV-analysis is the more intellectually interesting proposition, but it has me thinking. Next up: black-market karma services. I will facilitate karma-parties… for a nominal (karma) fee, of course. If you want to maintain the pretense of legitimacy, we will need to do some karma-laundering, ensuring that your posts appear as though they could be worth the amount of karma they have received. Sock-puppet accounts to provide awful arguments that you can quickly demolish? Karma mines. And then, we begin to sell LW karma for Bitcoins, and--
...okay, perhaps some sleep is in order first.
It is clear we need to start work on a distributed, decentralised, cryptographically-secure Internet karma mechanism.
Create a dozen sockpuppet accounts and use them to upvote every single one of your posts. Duh.
That’s like getting a black belt in karate by buying one from the martial arts shop. It isn’t karmawhoring unless you’re getting karma from real people who really thought your comments worth upvoting.
“Getting karma from real people who really thought your comments worth upvoting” sounds like a good thing, so why the (apparently) derogatory term karmawhoring?
It is good to have one’s comments favourably appreciated by real people. Chasing after that appreciation, not so much. Especially, per an ancestor comment, trying to achieve that proxy measure of value while minimizing the actual value of what you are posting. The analogy with prostitution is close, although one difference is that the prostitute’s reward—money—is of some actual use.
Not as straightforward as it sounds. Irrelevant one-sentence comments upvoted to +10 will attract more downvotes than they would otherwise.
This would indeed count as “minimal contribution”, but still sounds like a lot of work...
This is actually a really worthwhile skill to learn, independently of any LW-related foolishness. And it is actually a rationality problem.
You mean to the extent that any problem at all is a rationality problem, or something else?
It’s a bias, as far as I’m concerned, and something that needs to be overcome. People with egos can be right, but if one can’t deal with the fact that they’re either right or wrong regardless of their egotism, then one is that much slower to update.
Dealing with others’ irrationality is very much a rationality problem.
Ignore this.
It is what we would call an “instrumental rationality” problem. And one of the most important ones at that. Right up there with learning how to deal with our own big egos… which you seem to be taking steps towards now!
And I thought I was the only one getting pummeled here...
UPDATE: Lame quest was lame. I’m already back up to positive karma although I hit −100 a couple days ago.
Maybe I should try for −1000 next time, instead.
Some users don’t read the HP:MoR threads, and some users only read the HP:MoR threads. You don’t have to feel like you have a reputation here yet. Also, welcome to Less Wrong.
Has anybody ever considered moving the HP:MoR threads to another site?
There are threads on other sites (the TVTropes one is the biggest, I think, but I know the xkcd forums have a thread, and I’m sure others do as well). Part of the value of having HP:MoR threads here is that it makes it likely that people who come here for the MoR threads will stay for the rest of the site, but I agree that the karma on them is atypical for karma on the site, and decoupling it would have some value (but I suspect higher costs than value).
As I mentioned elsewhere, it would have the effect of making http://lesswrong.com/r/discussion/topcomments/ more useful (for people who don’t read HP:MoR, such as me).
--1943 Disney cartoon
Aaron Sloman
-- David Henderson on Social Darwinism
Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi
Eric–Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas
I don’t see why the first hypothesis should necessarily be rejected out of hand. If the supposed mechanism is unconscious then having it react to erotic pictures and not particular casino objects seems perfectly plausible. Obviously the real explanation might be that the data wasn’t strong enough to prove the claim, but we shouldn’t allow the low status of “psi theories” to distort our judgement.
One good thing about Bayesian reasoning is that assigning a prior belief very close to zero isn’t rejecting the hypothesis out of hand. The posterior belief will be updated by evidence (if any can be found). And even if you start with a high prior probability and update it with Bem’s evidence for precognition, you would soon have a posterior probability much closer to zero than your prior :)
BTW there is no supposed mechanism for precognition. Just calling it “unconscious” doesn’t render it any more plausible that we have a sense that would be super useful if only it even worked well enough to be measured, and yet unlike all our other senses, it hasn’t been acted on by natural selection to improve. Sounds like special pleading to me.
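To put numbers on the first paragraph above, a minimal sketch (the likelihood ratios are purely illustrative, not Bem’s actual figures):

    # Bayesian updating from a tiny prior: setting p(psi) = 1e-20 is not
    # a refusal to consider the hypothesis; strong enough evidence moves it.
    prior = 1e-20
    prior_odds = prior / (1 - prior)

    # Suppose each independent experiment's outcome were 10x more likely
    # under psi than under chance; each such result multiplies the odds by 10.
    likelihood_ratio = 10.0
    for n in (1, 10, 20, 25):
        posterior_odds = prior_odds * likelihood_ratio ** n
        p = posterior_odds / (1 + posterior_odds)
        print(n, "experiments -> posterior", p)
    # After about 20 such results the posterior crosses 0.5: the tiny prior
    # delays the update, but never forbids it.

And the arithmetic cuts both ways: if the data actually favor the null, as the reanalysis argues, the same updating pushes the posterior down instead.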
FiftyTwo wasn’t arguing that the sense was plausible. He was conditioning on the assumption that the sense exists.
OK, point taken. However, there being no proposed mechanism for precognition, it can hardly be called “plausible” that it operates inconsistently and that the experiment just happened to pick one of the things it can do out of all possibilities.
After all, if nobody knows how it’s supposed to work, how does the experimenter justify claiming his data as evidence for precognition rather than quantum pornotanglement? You could say I just made that up on the spot. It doesn’t matter: precognition isn’t necessarily a thing either.
How exactly does “quantum pornotanglement” work, and why doesn’t it count as a type/mechanism for precognition?
Now I’m thinking of pin-up Feynman diagrams.
(Does Rule 34 apply?)
Analogously, if someone told me they had a magic rock that could pick up certain pieces of metal and not others, and couldn’t explain why, it might be that they are wrong and it can’t pick up any metals at all, or there may be an underlying effect causing these observations that we don’t understand. In the analogy, magnetism can be observed long before it is understood, and why some metals are and aren’t magnetic isn’t a trivial problem.
Similarly it may be that some psychic phenomenon exists which works for some things and not for others, for reasons we’re not aware of. The fact that we can’t fully explain why it works in some cases but not others doesn’t mean we should outlaw evidence of the cases where it does.
I would at least expect them to be able to demonstrate their magic rock and let me try it out on various materials.
If they had a rock that they claimed could pick up copper but not brass, based on only one experiment, but the rock now doesn’t work if any scientists are watching, I’d be disinclined to privilege their hypothesis of the rock’s magic properties.
Nobody is outlawing the evidence. I’m saying the evidence is unconvincing, and far short of what is needed to support an extraordinary claim such as precognition. It is for example much less rigorous than the evidence there was for another causality-violating hypothesis: FTL neutrinos. That turned out to be due to an equipment defect. Many were disappointed but nobody was surprised. Same reference class if you ask me.
Chinese proverb, meaning “the onlooker sees things more clearly”, or literally, “the player lost, the spectator clear”
Chinese proverb, “three men make a tiger”, referring to a semi-mythological event during the Warring States period:
-- Wikipedia
In personal development workshops, the saying is, “the one with the mike in their hand is the last to see it.” Of doctors and lawyers it is said that one who treats himself, or acts in court for himself, has a fool for a client.
-- Scott Locklin
Cryonics?
I’m curious. Were you agreeing with the quote (and thus dissing cryonics), disagreeing with the quote (and bringing cryonics as a counterexample), or genuinely without agenda?
Partly that second one, partly just curious if it was an intended subject.
The original context is that Scott Locklin is a nanotechnology skeptic.
Follow the link, he explains it there.
Manifestly stupid.
-Biutiful
Rasmus Eide aka. Armok_GoB.
PS. This is not taken from an LW/OB post.
Everything needs to be taken both seriously and not-seriously. Tepid unreflective semi-seriousness is always a mistake.
--Samuel Johnson, The Adventurer, #119, December 25, 1753.
-A Weak Hadith of the Prophet Muhammad
Merlin, Sign of Chaos
T. S. Eliot
I’ve read this a few times, but I’m still not seeing anything except “Non-believers are dummies, ha!”, and I wonder if that’s all there is to it or if I’m just getting blocked by my “oh-crap-what-did-he-say-about-my-tribe?” alarms going off.
I may very well be reading what I want to read out of this quote, but I feel like if the quote is to be taken as a jab at non-believers, it’s also a jab at believers. The “ordinary man claiming to be a skeptic” part is explicit, but note that before that, he claims most are incapable of both much doubt and much faith, which I think implies that the same issue goes for believers and non-skeptics.
The basic idea I’m pulling from the quote seems to be that most people won’t critically think about their ideas, so you can’t always trust another’s self-labeling to decide if their beliefs have been well thought out.
Consider “The majority of this liquid is not water”.
--Francis Bacon, Novum Organum (1620)
-Elizabeth Barrett Browning, Aurora Leigh, 1856
Martin Gardner, The Annotated Alice
Leaving aside the dubiousness of calling the way the universe actually works “nonsense” and “mad”: It seems very, very, very unlikely that anything in Lewis Carroll’s writings was a metaphor for quantum mechanics. He died in 1898.
(I suppose something can be used as a metaphor for quantum mechanics without having been intended as one, though.)
The heck? Quantum fields are completely lawful and sane. Only the higher levels of organization, i.e. human beings, are bugfuck crazy.
Behold, the Copenhagen Interpretation causes BRAIN DAMAGE.
As natural as QFT seems today, my understanding is that in 1960, before many of the classic texts in the domain were published, the ideas still seemed quite strange. We would do well to remember that when we set out to search for other truths which we do not yet grasp.
:p
Maybe, but the Big World idea causes much more severe damage, judging by the recent discussions here and elsewhere.
What’s Martin complaining about, exactly? That goodness is nowhere in physical law, so things can be unfair and horrible for no reason? That goodness is reducible in the first place? That physics is hard and therefore deserves nasty words like “absurd”?
Lewis Carroll was religious, and to add to that, he was human.
These threads would be very sparsely populated if we avoided quoting humans.
You have misrepresented me. I was refuting the bit where a human was said to be doing something “rationally and without illusion”: chances are that doesn’t happen (especially regarding a topic as broad as “life”).
Upvoted for dry wit.
Is fiction permitted? Most of my favorite quote are not from ‘humans’.
For that matter, so was Martin Gardner.
Danny Hillis
Can you please explain this, slowly and carefully? It sounds plausible, and I’m trying to improve my understanding of space-time / 4-D thinking.
When analysing a circuit we normally consider a wire to have the same voltage along its entire length. (There are two problems with this: voltage changes only propagate at c, and the wire has a resistance. Normally these are both negligible.) Thus we can view wires as taking a voltage and spreading it out along a line in space.
On the other hand, memory locations take a voltage and spread it out through time. So they are in some sense a wire pointing in the time direction.
Sadly, the analogy doesn’t quite hold up. Wires have one spatial dimension but also have a temporal dimension (i.e. wires exist for more than an instant). So if you rotated a wire so that its spatial dimension pointed along the temporal dimension, its temporal dimension would rotate down into one of the spatial dimensions. It would still look like a wire! A memory location, by contrast, has effectively no spatial extent: it’s a very small bit of metal (you could make one in the shape of a wire, but people don’t). Thus it has a temporal extent but no spatial extent. So if you rotated one you could get something that had a spatial extent but no temporal extent. This would look like a piece of wire that appeared for an instant and then disappeared again.
Amazing! So a stricter analogy might be a memory location and a lightning bolt—the memory location occupies only a tiny amount of space, and the static discharge of lightning takes only a tiny amount of time.
Ponder only the one-dimensional time for now. At every point of time, you have only this moment and nothing more. But with memories, you have some previous moments cached. Stored somewhere “orthogonal” to the timeline.
I’ve heard it here: http://edge.org/conversation/a-universe-of-self-replicating-code
On a site even better than this one, and quite unpopular here, too. Read or watch Dyson there, as well as many others.
Is Edge the more unpopular site, or are you thinking of someplace else?
For what it’s worth, I don’t have anything against Edge, I just get bored reading it, even when the question is something I’m interested in.
Robert Brault
Am I the only one who didn’t realize before reading other comments that he was not claiming to have been converted by his nostrils?
Particularly interesting since I (and, I suspect, others on LW) usually attach positive affect to the word “skeptic”, since it seems to us that naivete is the more common error. But of course a Creationist is sceptical of evolution.
(Apparently both spellings are correct. I’ve learned something today.)
I’d call creationists “evolution deniers” before I’d call them “evolution skeptics”, but I suppose they’d do the same to me with God...
I must be misinterpreting this, because it appears to say “religion is obvious if you just open your eyes.” How is that a rationality quote?
LW’s standards for rationality quotes vary, but in any case this does allow a reading that endorses letting perceived evidence override pre-existing beliefs, if one ignores the standard connotations of “skeptic” and “missionary”.
I guess, but that seems like a strange interpretation seeing as the speaker says he’s no longer “a skeptic” in general.
The point of rationality isn’t to better argue against beliefs you consider wrong but to change your existing beliefs to be more correct.
That’s a good reminder but I’m not sure how it applies here.
A quote that calls the holder of a potentially wrong belief a “skeptic” rather than a “believer” is more useful since it makes you more likely to identify with him.
Also judging from his other quotes I’m pretty sure that’s not what he meant...
--Alan Belkin, From the Stock Market to Music, via the Theory of Evolution
This was just the first bit that stood out as LW-relevant; he also briefly mentions cognitive bias and touches on the possible benefits of cognitive science to the arts.
Bruce Sterling
Human beings have been designed by evolution to be good pattern matchers, and to trust the patterns they find; as a corollary their intuition about probability is abysmal. Lotteries and Las Vegas wouldn’t function if it weren’t so.
-Mark Rosenfelder (http://zompist.com/chance.htm)
-- Bjork
--Oswald Spengler, The Decline of the West
That sounds deep, but it has nothing to do with rationality.
Not really; for example, it is actually pretty clearly connected to fun theory.
“An organized mind is a disciplined mind. And a disciplined mind is a powerful mind.”
-- Batman (Batman the Brave and the Bold)
That doesn’t seem to follow. An organized mind may not be disciplined. It may even be obsessively organized at the expense of being disciplined.
Assuming the mind is human, I suppose you might have to modify it to ever make it truly organized, but identifying and organizing one’s thoughts is an important part of rationality. You cannot make any effort to organize your thoughts without a certain degree of discipline. Think of the martial arts metaphor people here keep using in regards to rationality.
I expect there is a correlation between degree of organisation, degree of discipline and measures of a mind’s ‘power’. But this relationship is definitely not one of a series of “is a”.
To be honest I try not to. That kind of thinking seems to lead to “koans”, which seem to be a name for saying things that are blatantly false but feeling deep while doing so because there is some loosely related not-false lesson that someone could conceivably deconstruct from the koan.
So says a man-dressed-like-a-bat.
(That’s not a jibe aimed at the quote but rather a reference to this.)
Downvoted because this comment serves only to propagate a mildly-entertaining meme, rather than contributing to the discussion in some way.
In recent years, I’ve come to think of myself as something of a magician, and my specialty is pulling the wool over my own eyes.
--Kip W
Civil wars are bitter because
---Thucydides
Found here.
Andrew Hussie
Is there a reason all the b’s have been replaced by 8′s?
Character typing quirk in the original.
The typing quirks actually serve a purpose in the comic. Almost all communication among the characters takes place through chat logs, so the system provides a handy way to visually distinguish who’s speaking. They also reinforce each character’s personality and thematic associations—for example, the character quoted above (Aranea) is associated with spiders, arachnids in general, and the zodiac sign of Scorpio.
Unfortunately, all that is irrelevant in the context of a Rationality Quote.
You’re right, never mind. Still internalizing the new set of ancestors.
I hate to downvote Homestuck, but there I go, downvoting it. The typing quirks and chatlog-style layout are too specific to the comic.
Every time someone mentions Homestuck I resist (until now) posting this image macro.
I spent a few minutes reading Homestuck from the beginning, but it did not grab me at all. Is there a better place to start, or is it probably just not my cup of tea?
(Speaking of webcomics, I have a similar question about Dresden Codak.)
It starts pretty slow. Most of the really impressive bits, to my taste, don’t start happening until well into act 4, but that’s a few thousand (mostly single-panel, but still) pages of story to go through; unless you have a great deal of free time, I wouldn’t hold it against you if you decided it’s not for you by the end of act 2. Alternately, you might consider reading act 5.1 and going back if you like it; that’s a largely independent and much more compressed storyline, although you’ll lose some of the impact if you don’t have the referents in the earlier parts of the story to compare against. You’ll need to front-load a lot of tolerance for idiosyncratic typing that way, though.
Which brings me to quotes like MHD’s: for quotation out of context, I would definitely have edited out the typing quirks (or ed8ed, if we’re being cute). The quirks are more about characterization than content, and some of the characters are almost unreadable without a lot of practice.
Dresden Codak, incidentally, doesn’t have this excuse. If you’ve read a couple dozen pages of that and didn’t like it, you’re probably not going to like the rest.
I’ve never been sure exactly where and how to get into the Dresden Codak storyline; but the one-offs like Caveman Science and the epistemological RPG are some of my favorite things on the internet.
The first real “storyline” Dresden Codak comic can be found here. That said, a lot of people I’ve spoken with simply don’t like the Dresden Codak storyline in any form, and prefer the funny one-offs to any of the continuity-oriented comics.
A couple dozen pages of Dresden Codak is almost a third of the entire thing...
Perhaps it’s just me, but I think it’s sufficiently short that the naïve strategy (start at the beginning, click next until you get to the end) would work in this case.
(Incidentally, when you get to Hob #9, remember to read the description at the bottom of the page.)
I disagree with Nornagest: I think the best place to start is at the beginning. They pretty much had me at “fetch modus”, I was hooked from then on. A lot of really inspirational things start to happen later on, f.ex. the Flash animation “[S] WV: Ascend”, but it might be difficult to comprehend without reading the earlier parts.
I would also advise starting at the beginning because I’m starting to grow dissatisfied with the double-meta-reacharound tack that the comic is taking now… The earlier chapters had a much more coherent story, IMO.
-C. Mackay, Extraordinary Popular Delusions and the Madness of Crowds, 1852.
-- Trey Parker, Jewpacabra
(This is at about five minutes fifty seconds into the episode.)
Edit: Related Sequence post.
-Sister Y
This quote argues for a position, which is why I think it currently sits ugly at 0 karma after having sat ugly at 1 for a while, but I think, inseparable from the position being argued for, it espouses an important general principle which one should not simply ignore because it can apply to one’s preconception; indeed (applying its lesson) that is precisely when we need the principle most.
So while I would have just taken the general principle out from Sister Y’s post if it were possible for me to do so (and taken the mediocre three to four karma I would have gotten for it), I’m glad that it was intertwined now, as it shows that yes, you’re supposed to apply the principle to even this (substitute anything for “this”, of course).
I do sincerely wonder what the world would look like if people could even-handedly apply lessons from quotes. There are many lessons here.
Edit: Actually, looking closely at what the words actually say, I realize it doesn’t, by itself, argue for the position that the former value is better than the latter value, but its context is an argument for said thing.
Edit2: If you look at the sort of quote in the original Rationality Quotes posts that were entirely Eliezer’s collection, they were mostly of the sort that were likely to make you think about something rather than something that is easy to agree with. A desire to return to that model could be what’s motivating the comment you’re reading.
In brief, you presented a quote (1) with a controversial position, (2) little LessWrong consensus, (3) no obvious relationship to generalized improvement at achieving goals, and (4) no relationship to the ideal scientific method. You are surprised (or disappointed) that it got negligible karma attention.
I notice I am confused.
Definitely not surprised. (Edit: okay, now I’m a little surprised. The quote has now been voted up to +4. My little discussion was convincing? I don’t know!) Maybe moderately disappointed. I think there’s a lot to be said for the meta level of “continue to search, and not just put on a show of searching, for where you’re wrong, even if you’ve already done this many times.” I’m a little more disappointed that the highest-voted quotes tend to be applause lights. (Though not always) (also, applause lights are not inherently bad things, but I wish they didn’t get the most karma).
(1) Visibility—people who missed the quote the first time saw our exchange on the side bar.
(2) I am also confused by the purpose of the rationality quotes page. It’s not surprising to me that lack of consensus limits upvote potential (i.e. local applause lights get voted up). That said, applause lights are grounded in particular communities. “I like human rights” is an applause light in the United States, but is a provocative position in North Korea. Some of the upvoting is based on the wish that the quote was more widely accepted in general society (i.e. we wish society was more like us)
(3) Notwithstanding what I just said, Rationality Quotes seems to function as an ideological purity tester. If it gets upvoted here, that shows it is part of the local consensus. In other words, I could post quotes that I thought were both post-modern and rationalist, and I expect they would be downvoted as outside the mainstream. To the extent that you think LessWrong has dysfunctional groupthink, I’m not sure the fight can be won in Rationality Quotes as opposed to Open Thread or Discussion. (I aspire to aspire to post into Main, so I seldom think about the social norms of that type of posting).
(4) In a substantive response to your quote, LessWrong is surprisingly favorable to child-free living in its attitudes. Even controlling for age, socioeconomic status, and gender, we are not even vaguely representative of how frequently people desire to have children.
I’m curious. Did you say “aspire to aspire to post into Main” deliberately?
T. S. Eliot, The Rock
Leonid: Without a purpose, a man is nothing.
Newton: Yes. But we wonder...do you share our gift? Do you have the necessary vision? Do you know the final fate of man?
Leonid: How could anyone know things like that?
Council: The Greater Science. The Quiet Math. The Silent Truth. The Hidden Arts. The Secret Alchemy.
Newton: Every question has an answer. Every equation has a solution.
S.H.I.E.L.D. #1 (Jonathan Hickman)
The point of this one isn’t clear.
I guess it probably should have been broken up into a couple of shorter ones, but it was a single, short exchange and I just couldn’t resist. That the question of the final fate of man can, like any question, be answered with a greater science, with the hidden arts… this is essentially the message of transhumanist rationality, and it was beautifully phrased here. “Without a purpose, a man is nothing”… this really should have been off on its own, in retrospect, but its meaning is a little bit less obscure, I think.
Isn’t one of the implications of Gödel’s incompleteness theorem that there will always be unanswerable questions?
Only if the questioner is consistent.
And there’s no way to tell whether the questioner is inconsistent, or there exist unanswerable questions, right? [In any case, I would be greatly astonished if “What is the final fate of man?” was found to be isomorphic to a human Godel sentence ;-) ]
> “The penalty of not doing philosophy isn’t to transcend it, but simply to give bad philosophical arguments a free pass.”
David Pearce, www.reddit.com/r/Transhuman/comments/r7dui/david_pearce_ama/c43jfmk
“Dear, my soul is grey
With poring over the long sum of ill;
So much for vice, so much for discontent…
Coherent in statistical despairs
With such a total of distracted life,
To see it down in figures on a page,
Plain, silent, clear, as God sees through the earth
The sense of all the graves, - that’s terrible
For one who is not God, and cannot right
The wrong he looks on. May I choose indeed
But vow away my years, my means, my aims,
Among the helpers, if there’s any help
In such a social strait? The common blood
That swings along my veins, is strong enough
To draw me to this duty.”
Elizabeth Barrett Browning, Aurora Leigh, 1856
-Thomas Huxley
I’ve traditionally gone with: the board is the space of/for potentially-live hypotheses/arguments/considerations, pieces are facts/observations/common-knowledge-arguments, moves are new arguments, the rules are the rules of epistemology. This lets you bring in other metaphors: ideally your pieces (facts/common-knowledge-arguments) should be overprotected (supported by other facts/common-knowledge-arguments); you should watch out for zwischenzugs (arguments that redeem other arguments that it would otherwise be justified to ignore); tactics/combinations (good arguments or combinations of arguments) flow from strategy/positioning (taking care in advance to marshal your arguments); controlling the center (the key factual issues/hypotheses at stake) is important; tactics (good arguments) often require the coordination of functionally diverse pieces (facts/common-knowledge-arguments), and so on.
The subskills that I use to play chess overlap a lot with the subskills I use to discover truth. E.g., the subskill of thinking “if I move here, then he moves there, then I move there, then he moves there, …” and thinking through the best possible arguments at each point rather than just giving up or assuming he’ll do something I’d find useful, i.e. avoiding motivated stopping and motivated continuation, is a subskill I use constantly and find very important. I constantly see people only thinking one or two moves (arguments) ahead, and in the absence of objective feedback this leads to them repeatedly being overconfident in bad moves (bad arguments) that only seem good if you’re not very experienced at chess (argumentation in the epistemic sense).
Oh, a rationality quote: Bill Hartston: “Chess doesn’t make sane people crazy; it keeps crazy people sane.”
And Bobby Fischer: “My opponents make good moves too. Sometimes I don’t take these things into consideration.”
Johan Liebert, Monster
Dick Teresi, The Undead
--Joseph Conrad, Heart of Darkness
T. S. Eliot, Murder in the Cathedral
— Poe, The Purloined Letter
-- John McCarthy
Repeat
I’m starting to feel it was a mistake to have so many of those threads instead of a single one.
A single thread would have been of unmanageable size.
In what sense unmanageable? What would it make harder to do that is easy to do now?
It seems to me the current setup makes it harder to know if you’re posting a repeat, or to display a list of all top quotes.
Also, I think it leads to more barrel-scraping this way; it seems to me that for the most part we ran out of the really great quotes and now often things get posted that have no special rationality lesson, but instead appeal to the tastes and specific beliefs common in our particular community.
Unmanageable because the site software doesn’t show more than 500 (top-level?) comments, and because large numbers of comments load more slowly.
There’s a way to find top-voted quotes—Best of Rationality Quotes 2009/2010 (Warning: 750kB page, 774 quotes). This could be considered a hint about the quantity problem.
There is another one for 2011.
As for dupes, the search on the site is adequate for finding them—what’s needed is a recommendation on the quotes page for people to check before posting.
I think the quotes continue to be somewhat interesting, but it’s not so much that there are no great ones left (though I was surprised to discover recently that “Nature to be commanded must be obeyed” hadn’t been listed) as that they tend to keep hitting the same points.
I see. Thank you.
It seems to me that there’s room for improvement to the software, then. However, I’ll shut up at this point.
You’re welcome.
There’s always room for improvement in the software. Once in a while, there’s a request for suggestions, so you might want to think about the changes you’d like to see.
To my mind, the redundancy problem with the quotes pages isn’t so much repeated quotes as different quotes which mean pretty much the same thing.
How many different things are there to say about rationality?
Well, the right question is “How many different brief things are there to say about rationality?”
If you’re allowed to go on at length, the sequences imply that there’s quite a bit to say.
I don’t think the question about brief statements has an a priori answer.
Thanks for asking about unmanageability.
That fits neatly with the importance of being specific.
I had enough experience with the site to know that very long threads don’t work well, and to have a feeling for the quote threads adding up to a huge lump. But because I carried that around as a single chunk of background knowledge, it didn’t occur to me that someone suggesting a single quote thread might simply not share it.
— Waiting for God (TV Series)
Is there a point to this quote, besides that this Diana character doesn’t understand the term ‘moral dilemma’?
That the kind of “moral dilemmas” philosophers tend to contemplate are very different from the kind of dilemmas people encounter in practice.
Perhaps that it requires significant time and cognitive energy to make difficult decisions in general or reflectively modify one’s moral system in particular?
ETA: can someone explain the downvote?
What is a man? A miserable little pile of secrets. (0:43 – 0:48)
-- Dracula
“What is a man? A miserable little pile of replicators!” “What is a man? A miserable little pile of thermostats!”
Dupe, oddly enough.
Maybe this song won’t get downvoted? It’s a little more on-topic for LessWrong, even if it does get political at the end. ;)
-- Pete Seeger, “Waist Deep in the Big Muddy”
Quick question: Is this getting downvoted because of the quote or because I talked about downvoting?
(The song itself is a rather amusing lesson in escalation of commitment and sunk cost fallacy, among other things...)
It’s too long. This thread is about quotes, not about making others read a whole piece of work you like. Perhaps use the monthly media thread for that purpose?
For this thread you could have perhaps reduced the quotable to this:
or perhaps even two verses would be acceptable, like this:
and just linked to some other page where one could see the whole song.
But not the whole damn thing.
Thanks.
If I downvoted this comment but not the song would that count or not?
How can I tell the difference? (I assume that you mean downvoting the song on Youtube?)
Lolz. I think he meant “downvoted this comment” where “this” means “the comment he was quoting” as opposed to the other comment which contained the song.
T. S. Eliot, Murder in the Cathedral
Correct me if I’m wrong, but doesn’t this seem like an affirmation of religious morality and a denunciation of consequentialism? I’m failing to see the rationality here.
He probably means that, in certain kinds of Big World, TDT/UDT leads (he thinks) to unpopular conclusions. E.g., that we should believe in all deities who punish disbelief, if they exist in some possible world.
This seems close to the reason I rejected the mathematical macrocosm hypothesis, even before Someone Who’s Probably Not Will Newsome explained part of his position. If Tegmark IV either does not constrain anticipation, or calls you a Boltzmann Brain equivalent, then it fails as an explanation. And upon close inspection, I don’t think I reject Boltzmann just by applying decision theory. It seems logically absurd to say, “I am likely a Boltzmann Brain, but acting like a real boy has more expected value.” The first clause means I shouldn’t trust my reasoning and should likely just think happy thoughts. I think the best theory of reality will say a random (real) mind would likely benefit from rationality, to at least the extent that we appear to benefit.
Rationality in:
- Recognition of the timeless/timeful distinction (Law of God, Law of Man)
- Emphasizing timeless effects even when they’re heavily discountable
- Pointing out that history tends to make fools of the temporally good
- Touching on the touchy theme of consent
- Proposing arguments about when it is or is not justified to take into account or ignore the arguments of others who seem to be acting in good faith
Also, even a simple counter-affirmation to local ideology is itself useful if it’s sufficiently eloquently-stated.
(Pretty drunk, apologies for any errors.)
You mean the part where you equate ‘timeless’ considerations with the Law of God?
Conditional on the existence of a Law of God (and the sort of god in whom Eliot believed) that’s not so very unreasonable. It’s worth distinguishing between “irrational” and “rational but based on prior assumptions I find very improbable”.
(None the less, I don’t think there’s much rationality in the lines Will_Newsome quoted, though it does gesticulate in the general direction of an important difficulty with consequentialism: a given action has a lot of consequences and sorting out the net effect is difficult-to-impossible; so we have to make do with a bunch of heuristic approximations to consequentialism. I’ll still take that over a bunch of heuristic approximations to the law of a probably-nonexistent god, any day.)
Wait, it explicitly says that his decision (if you call that “decision” to which his whole being gives entire consent) to give his life to the Law of God should (and is to) be taken timelessly (“out of time”). …I don’t see how that’s not clear. Most of the time when people complain about equivocation/syncretism it’s because the (alleged) meaning is implicit or hidden one layer down, but that’s not the case here.
That’s definitely not an error. Have you read much T. S. Eliot? He was obsessed with the timeful/timeless local/global distinction. Read Four Quartets.
I wasn’t trying to imply you misrepresented T. S. Eliot’s obsession, just that you made an error in advocating it as an example of a “Rationality Quote”. Because it’s drivel.
0_o
/sigh...
What is the empirical difference between a person who is temporally vs timelessly good?
-- from “The Greatest Love of All”, Music by Michael Masser, Lyrics by Linda Creed
First performed by George Benson
Edit: truncated at Alicorn’s suggestion.
I think this is a rare pop song that successfully treats a somewhat abstract idea.
Among others, of course! Benson’s is good, though, and I had not heard it before. Moar George Benson (with the McCoy Tyner trio).
Benson did it first. ;)
(http://www.youtube.com/watch?v=U2qLu1CYBf4)
-- Billy Joel
Your song lyrics might be better received if you truncated them somewhere and possibly included an explanation.
Well, if people want to downvote good advice because it’s in song form, that’s not my problem. ;)
It’s not like I have anything to worry about: I’ve got over seven thousand karma right now, so my posting privileges aren’t in danger or anything. Downvote away!
(I’ll cut down the other one, though.)
Karma is information, not money.
Actually, some on LessWrong try to make bets with it as currency.
You shouldn’t let the fact that you have given clear and useful comments in the past excuse you from giving equally good ones now.
What I meant to say was that, well, I think it’s a good Rationality Quote, even if a lot of other people don’t.
Why do you think it’s a good rationality quote?