Rationality Quotes April 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
And one new rule:
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:
[...]
[...]
[...]
[...]
From a March 26, 2014 talk. Slides available here.
A video of the whole talk is available here.
And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.
It is misleading to attribute that book solely to Voevodsky.
Yes. But it’s forgiveably misleading to attribute it non-exclusively to him, in a thread of comments which was started about him.
Computer scientists seem much more ready to adopt the language of homotopy type theory than homotopy theorists at the moment. It should be noted that there are many competing new languages for expressing the insights garnered by infinity groupoids. Though Voevodsky’s language is the only one that has any connection to computers, the competing language of quasi-categories is more popular.
I know you’re not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it.
Computer proof verification is like taking off and nuking the whole site from orbit: it’s the only way to be sure.
“It is one thing for you to say, ‘Let the world burn.’ It is another to say, ‘Let Molly burn.’ The difference is all in the name.”
-- Uriel, Ghost Story, Jim Butcher
I love the character of Uriel in the Dresden Files. I find his interpretation of the Fallen very interesting also.
-- Alfred Adler
ADDED: Source: http://en.wikiquote.org/wiki/Alfred_Adler
Quoted in: Phyllis Bottome, Alfred Adler: Apostle of Freedom (1939), ch. 5
Problems of Neurosis: A Book of Case Histories (1929)
Comedian Simon Munnery:
Wittgenstein, Culture and Value
Douglas Adams, Hitchhiker’s Guide to the Galaxy
Thanks for this one. It’s been some time since I re-read Douglas Adams, and I’d forgotten how good he can be. It makes so much sense reading this right after reading “Bind yourself to Reality”. Had a good long guffaw out of this one. :-)
Alan Sokal, What Is Science
A bigger danger is publication bias. Collect 10 well-run trials without knowing that 20 similar well-run ones exist but weren’t published because their findings weren’t convenient, and your meta-analysis ends up distorted from the outset.
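The distortion described above is easy to simulate. Here’s a minimal sketch (all numbers invented): the true effect is zero, but a naive “publish only if positive and significant” filter makes the pooled estimate of the published literature look convincingly positive.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
N_PER_TRIAL = 50    # observations per trial
N_TRIALS = 500      # trials actually run

def run_trial():
    # each trial estimates the effect from noisy observations (known sd = 1)
    xs = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_TRIAL)]
    mean = sum(xs) / len(xs)
    se = 1.0 / math.sqrt(N_PER_TRIAL)
    return mean, se

all_trials = [run_trial() for _ in range(N_TRIALS)]

# publication filter: only 'positive and significant' results get published
published = [(m, se) for (m, se) in all_trials if m / se > 1.96]

def pooled_estimate(trials):
    # standard fixed-effect (inverse-variance weighted) meta-analytic pooling
    weights = [1.0 / se ** 2 for (_, se) in trials]
    return sum(w * m for (m, _), w in zip(trials, weights)) / sum(weights)

print("trials run: %d, trials published: %d" % (N_TRIALS, len(published)))
print("pooled effect, all trials:     %+.3f" % pooled_estimate(all_trials))
print("pooled effect, published only: %+.3f" % pooled_estimate(published))
```

A meta-analyst who sees only the published trials recovers a solidly “significant” effect from a treatment that does nothing; the filter, not the data, creates the result.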
Does anyone know how often this happens in statistical meta-analysis?
Fairly often. One strategy I’ve seen is to compare meta-analyses to a later very-large study (rare for obvious reasons when dealing with RCTs) and see how often the confidence interval is blown; usually much more often than it should be. (The idea is that the larger study will give a higher-precision result which is a ‘ground truth’ or oracle for the meta-analysis’s estimate, and if it’s later, it will not have been included in the meta-analysis and also cannot have led the meta-analysts into Millikan-style distorting of their results to get the ‘right’ answer.)
For example: LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. “Discrepancies between meta-analyses and subsequent large randomized, controlled trials”. N Engl J Med 1997; 337:536-42
(You can probably dig up more results looking through reverse citations of that paper, since it seems to be the originator of this criticism. And also, although I disagree with a lot of it, “Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses”, Al khalaf et al 2010.)
I’m not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.
As a percentage? No. But qualitatively speaking, “often.”
The most recent book I read discusses this particularly with respect to medicine, where the problem is especially pronounced because a majority of studies are conducted or funded by an industry with a financial stake in the results, with considerable leeway to influence them even without committing formal violations of procedure. But even in fields where this is not the case, issues like non-publication of data (a large proportion of all studies conducted are not published, and those which are not published are much more likely to contain negative results) will tend to make the available literature statistically unrepresentative.
We can’t know for certain. That’s the idea of systematic biases: there is no way to tell whether all your trials are slanted in a specific fashion if the biases also appear in your high-quality studies.
On the one hand, we have fields such as homeopathy or telepathy (Ganzfeld experiments) where meta-analyses that treat all studies mostly equally find that homeopathy works and telepathy exists. On the other hand, meta-analyses that try to filter out low-quality studies come to the conclusion that homeopathy doesn’t work and telepathy doesn’t exist.
Sokal’s hoax was heroic
See also Jaynes’s comments on sampling error vs systematic biases (‘Emperor of China fallacy’) which I quote in http://www.gwern.net/DNB%20FAQ#flaws-in-mainstream-science-and-psychology
It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.
- Paul Krugman
I’m suspicious of everything Paul Krugman says. I believe him to be MoreWrong on nearly every subject, and also, probably a sociopath. Doug Casey has him pegged right, as totally intellectually dishonest, …a total charlatan.
That’s as may be—but is the quote a bad heuristic or a good one?
It’s a good heuristic, now if only Krugman would actually follow it.
He has done so at least once in the fairly recent past, with regard to academic research that’s in line with his predispositions. He writes so many blog posts that it’s hard to find the link.
Jerry Spinelli, Stargirl
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that “things you don’t value much anymore can still provide great utility for other people” is a powerful lesson in general.
-- Max Tegmark, Scientific American guest blog, 2014-02-04
I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
Better answers, yes, but I’d expect the new answers to be at least quite like the GR answers. I mean, probably no singularities in the real theory, but lots of time-warping and space-whirling, surely. He only says ‘take seriously’, not ‘swallow whole including the self-contradictory bits’.
Well… Einstein didn’t need a complete theory of quantum electrodynamics to predict the coefficients of spontaneous emission from thermodynamical arguments; I don’t think Bekenstein and Hawking need a complete theory of quantum gravity to make predictions other than those of classical GR either.
-- Timothy Gowers, on finding out that a method he’d hoped would work in fact would not.
I had been planning to post this (as in, had copied it from a text file saved for the purposes of this thread), saw it here, noted the fact, and then didn’t bother to upvote until just now. How odd.
http://www.reddit.com/r/askscience/comments/e3yjg/is_there_any_way_to_improve_intelligence_or_are/c153p8w
reddit user jjbcn on trying to improve your intelligence
If you’re not a student of physics, The Feynman Lectures on Physics is probably really useful for this purpose. It’s free for download!
http://www.feynmanlectures.caltech.edu/
It seems like the Feynman lectures were a bit like the Sequences for those Caltech students:
Trying to actually understand what equations describe is something I’m always trying to do in school, but I find my teachers positively trained in the art of superficiality and dark-side teaching. Allow me to share two actual conversations with my Maths and Physics teachers from school:
(Physics class)
And yet to most people, I can’t even vent the ridiculousness of a teacher actually saying this; they just think it’s the norm!
Ahem:
For every EY quote, there exists an equal and opposite ~~EY~~ P.C. Hodgell quote:
(That was P.C. Hodgell, not EY.)
Good point, I’ll correct it.
Amusing, although I’ll point out that there are some subtle difference between a physics classroom and the MOR!universe. Or at least, I think there are...
I will only say that when I was a physics major, there were negative course numbers in some copies of the course catalog. And the students who, it was rumored, attended those classes were… somewhat off, ever after.
And concerning how I got my math PhD, and the price I paid for it, and the reason I left the world of pure math research afterwards, I will say not one word.
Were there tentacles involved? Strange ethereal piping? Anything rugose or cyclopean in character?
I think we can safely say there were non-Euclidean geometries involved.
Were there also course numbers with a non-zero complex part?
PEOPLE NEED TO STOP QUOTING PROFESSOR QUIRRELL LIKE HE IS ELIEZER YUDKOWSKY
HE IS NOT
THERE ARE SOME IMPORTANT DIFFERENCES AND THEY ARE VISIBLE AND THEY MATTER
THANK YOU
What level of school?
Secondary school.
A visit to wikipedia suggests that “secondary school” can refer to either what we in the U.S. call “middle school / junior high school”, or what we call “high school”. That’s a fairly wide range of grade levels. In which year of pre-university education are you?
Oh, okay. After I finish this year, I’ll study at school for one final year, and then go to university.
Edit: I am confused that this got five up votes, and would be interested in hearing an explanation from someone who up voted it.
I didn’t upvote you but I would have if you hadn’t mentioned it; it would have been because I appreciate people answering questions and finishing comment threads rather than leaving them hanging forever unresolved.
Cheers.
So you wanted to know not how to derive the solution but how to derive the derivation?
I wouldn’t blame the teacher for not going there. There’s not enough time in class to do something like that. Bringing the students to understand the presented math is hard enough. Describing the process of how this math was found would take too long, because especially for harder problems there were probably dozens of mathematicians who studied the problem for centuries in order to find those derivations that your teacher presents to you.
What’s wrong with saying something to the effect of “There’s a theorem—it’s not really within the scope of this course, but if you’re really interested it’s called the fixed-point theorem, you can look it up on Wikipedia or somewhere”?
Derive the derivation? Huh? And you say that’s different from ‘understanding’ it. No, I just didn’t have the most basic of intuitive ideas as to why he suddenly made an iterated equation, and I didn’t understand why it worked, at any level. It was all just abstract symbol manipulation with no content for me, and that’s not learning.
Furthermore, he does have the time. We have nine hours a week. With a class size of four pupils.
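We don’t know which equation the teacher turned into an iteration, but the standard intuition can be sketched in a few lines: rearrange f(x) = 0 into the form x = g(x), then feed the output back in as input; if g is a contraction near the solution (|g'(x)| < 1), the fixed-point theorem guarantees the iterates converge. The choice of x = cos(x) below is just a classic illustration, not the teacher’s actual problem.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    # iterate x -> g(x) until successive values agree to within tol
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# classic example: solve x = cos(x)
root = fixed_point(math.cos, 1.0)
print(root)            # the 'Dottie number', approximately 0.739085
print(math.cos(root))  # same value: cos maps the root back to itself
```

This is why the teacher could “suddenly” write an iterated equation: the iteration isn’t a trick, it’s the contraction property doing the work.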
He may actually not know. People who teach maths are often not terribly good at it. Why don’t you post the equation and the thing he turned it into? One of us will probably be able to see what is going on.
In all fairness, at university, being lectured by people whose job was maths research and who were truly world class at it, I remember similar happenings. Although they have subtler ways of telling you to shut up. Figuring out what’s going on between the steps of a proof is half the fun and it tends to make your head explode with joy when you finally get it.
I just gave a couple of terms of first year maths lectures, stuff that I thought I knew well, and the effort of going through and actually understanding everything I was talking about turned what was supposed to be two hours a week into two days a week, so I can quite see why busy people don’t bother. And in the process I found a couple of mistakes in the course notes (that of course get passed down from year to year, not rewritten from scratch with every new lecturer).
In my school math education we had the standard that everything we learn gets proved. Students who are not in the habit of proving math are not well prepared for doing real math at university, which is about mathematical proofs.
In general the math that’s not understood but memorized gets soon forgotten and is not worth teaching in the first place.
That’s a great rule, but it still has to have limits. Otherwise you couldn’t teach calculus without teaching number theory and set theory and probably some algebraic structures and mathematical logic too.
We actually did learn number theory, set theory, basic logic and algebraic structures such as rings, groups and vector spaces.
In Germany every student has to select two subjects called “Leistungskurse”, in which they get more classes. In my case I selected math and physics, which meant we had 5 hours’ worth of lessons in those subjects per week.
When I went to high school in Israel we had a similar system, but extra math wasn’t an option (at least not at my school).
A big part of an undergrad math (or CS) degree is spent on these subjects. I don’t believe the study-everything, prove-everything level you describe is attainable with 5 hours per week for 3 years at the high-school level, even with a very good self-selected student group.
The German school system starts by separating students into 3 different kinds of schools based on the academic skill of the student: Hauptschule, Realschule and Gymnasium. The Gymnasium is basically for those who go to university. That separation starts by school year 5 or 7, depending on where in Germany the school is located.
You have more than 3 years of math classes at school. I think proving stuff started at the 8 or 9 school year. At the beginning a lot of it focused on geometry.
At the time I think it was 4 hours of math per week for everyone. I think there were many cases where the students who were good at math had time to prove things while the more math-averse students took more time with the basic math problems.
What did the most advanced students (say, top 15%) study and prove by the end of highschool?
It’s been a while, but before introducing calculus we did go through the axioms and theorems of limits of functions.
Peano’s axioms, and the fact that it’s enough to prove a statement for n=0 and to prove the step n->n+1, were the basis for proofs by induction.
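For readers who haven’t seen the n=0 / n->n+1 pattern, the classic sum formula is a standard school example (not necessarily one from that curriculum):

```latex
\textbf{Claim.} $\sum_{k=0}^{n} k = \frac{n(n+1)}{2}$ for all $n \in \mathbb{N}$.

\textbf{Base case} ($n = 0$): $\sum_{k=0}^{0} k = 0 = \frac{0(0+1)}{2}$.

\textbf{Step} ($n \to n+1$): assuming the claim holds for $n$,
\[
  \sum_{k=0}^{n+1} k \;=\; \frac{n(n+1)}{2} + (n+1) \;=\; \frac{(n+1)(n+2)}{2},
\]
which is the claim for $n+1$. By induction, it holds for all $n$.
```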
Your previous comment:
Might as well be a description of almost all the non-CS math content in my CS undergrad degree. (The only core subjects missing are probability and statistics). Of course, the depth and breadth and quality of treatment may still be different. But maybe an average high school in Israel is really that much worse than a good high school in Germany.
I now recall that my father, who went to high school in Kiev in the 70s, used to tell me that the math I learned in the freshman year, they learned in high school. (And they had only 10 years of school in total, ages 7 to 17, while we had 12, ages 6 to 18.) I always thought his stories may have been biased, because he went on to get a graduate degree in applied math and taught undergrad math at a respected Russian university. So I thought maybe he also went to a top high school and/or associated with other students who were good at math and enjoyed it.
But I know there is a wide distribution of math talent and affinity among people. There are definitely enough students for math-oriented schools, or extra math classes or programmes in large enough schools, at that level of teaching. I just assumed based on my own experience that the schools themselves wouldn’t be good enough to support this, or wouldn’t be incentivized correctly. But there’s no reason these problems should be universal.
In university, students often spend time in large math lectures. There’s no real reason to expect that to be a lot more effective than a 15-person course with a good teacher.
In our times the incentives go against teaching like this. In Berlin, centralized math testing effectively means that all schools have to teach to the same test, and that test doesn’t contain complicated proofs.
Yes, the difference between the math education at a bad school with only 3 hours per week at the end and the math education at a good school in Germany with 5 hours per week might be the freshman year of non-CS math content of a CS undergrad degree.
What is wrong with learning logic, set theory, and number theory before (or in the context of high school, instead of) calculus?
EDIT: Personally, I think going into computer science would have been easier if in high school I learned logic and set theory my last two years rather than trigonometry and calculus.
The thing that’s wrong is exactly that it would indeed have to be instead of calculus. And then students would not pass the nationally mandated matriculation exams or university entry exams, which test knowledge of calculus. One part of the system can’t change independently from the others. I agree that if you’re going to teach just one field of math, then calculus is not the optimal choice.
I do believe that for every field that’s taught in highschool, the most important theories and results should be taught: evolution, genetics, cell structure and anatomy in biology; Newtonian mechanics, electromagnetism and relativity in physics (QM probably requires too much math for any high-school program); etc.
There won’t be time to prove and fully explain everything that’s being shown, because time is limited, and it’s better that all the people in our society know about classical mechanics and EM and relativity, than that they know about just one of them but have studied and reproduced enough experiments to demonstrate that that one theory is true compared to all alternatives of similar complexity.
And similarly, I think it would be better if everyone knew about the fundamental results of all the important fields of math, than being able to prove a lot of theorems in a couple of fields on highschool exams but not getting to hear a lot of other fields.
Really? I think it’s very beautiful and it’s what hooked me. And it’s the bit the scientists use. What would you teach everyone instead?
As far as possible, we should allow students to learn more and help guide them to the sciences. But scientists are in the end a small minority of the population and some things are important to teach to everyone. I don’t think calculus passes that test, and neither does classic geometry and analytic geometry, which received a lot of time in my school.
Instead I would teach statistics, basic probability theory, programming (if you can sell it as applied math), basic set and number theory (e.g. countable and uncountable infinities, rational and real numbers), basic computer science with some important cryptography results given without proof (e.g. public-key encryption). At least one of these should demonstrate the concept of mathematical proofs and logic (set theory is a good candidate).
Interesting question. I’m a programmer who works in EDA software, including using transistor-level simulations, and I use surprisingly little math. Knowing the idea of a derivative (and how noisy numerical approximations to them can be!) is important—but it is really rare for me to actually compute one. It is reasonably common to run into a piece of code that reverses the transformation done by another piece of code, but that is about it. The core algorithms of the simulators involve sophisticated math—but that is stable and encapsulated, so it is mostly a black box. As a citizen, statistics are potentially useful, but mostly just at the level of: this article quotes an X% change in something with N patients; does it look like N was large enough that this could possibly be statistically significant? But usually the problem with such studies is the systematic errors, which are essentially impossible for a casual examination to find.
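That parenthetical about noisy numerical derivatives is worth unpacking: with measurement noise of size ε, a finite difference with step h carries a noise error of order ε/h, so shrinking h past a point makes the estimate worse, not better. A sketch with invented numbers (f and the noise level are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)

NOISE = 1e-3  # standard deviation of the measurement noise

def f_noisy(x):
    # pretend each evaluation of f(x) = sin(x) carries measurement noise
    return math.sin(x) + random.gauss(0.0, NOISE)

def fd_derivative(f, x, h):
    # central finite difference: truncation error O(h^2), noise error O(NOISE/h)
    return (f(x + h) - f(x - h)) / (2.0 * h)

true_deriv = math.cos(1.0)
errors = {}
for h in (1e-1, 1e-4, 1e-7):
    errors[h] = abs(fd_derivative(f_noisy, 1.0, h) - true_deriv)
    print("h = %-6g  |error| = %.2e" % (h, errors[h]))
# shrinking h amplifies the noise instead of improving the estimate
```

The practical upshot is the one the comment gestures at: with noisy data there is an optimal step size, and blindly taking h to zero (as the textbook limit suggests) gives garbage.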
I see computer science as a branch of applied math which is important enough to be treated as a top-level ‘science’ of its own. Another way of putting it is that algorithms and programming are the ‘engineering’ counterpart to the ‘science’ of (the rest of) CS and math.
Programming very often involves math that is unrelated to the problem domain. For instance, using static typing relies on results from type theory. Cryptography (which includes hash functions, which are ubiquitous in software) is math. Functional languages in particular often embody complex mathematical structures that serve as design paradigms. Many data structures and algorithms rely on mathematical proofs. Etc.
That is also a fact that ought to be taught in school :-)
He doesn’t have to give proofs. Just explaining the intuition behind each formula doesn’t take that long and will help the students understand how and when to use them. Giving intuitions really isn’t esoteric trivia for advanced students, it’s something that will make solving problems easier for everyone relative to if they just memorized each individual case where each formula applies.
I suspect this is typical mind fallacy at work. There are many students who either can’t, or don’t want to, learn mathematical intuitions or explanations. They prefer to learn a few formulas and rules by rote, the same way they do in every other class.
Former teacher confirming this. Some students are willing to spend a lot of energy to avoid understanding a topic. They actively demand memorization without understanding… sometimes they even bring their parents as a support; and I have seen some of the parents complaining in the newspapers (where the complaints become very unspecific, that the education is “too difficult” and “inefficient”, or something like this).
Which is completely puzzling for the first time you see this, as a teacher, because in every internet discussion about education, teachers are criticized for allegedly insisting on memorization without understanding, and every layman seems to propose new ideas about education with less facts and more “critical thinking”. So, you get the impression that there is a popular demand for understanding instead of memorization… and you go to classroom believing you will fix the system… and there is almost a revolution against you, outraged kids refusing to hear any explanations and insisting you just tell them the facts they need to memorize for the exams, and skip the superfluous stuff. (Then you go back to internet, read more complaints about how teachers are insisting that kids memorize the stuff instead of undestanding, and you just give up any hope of a sane discussion.)
My first explanation was that understanding is the best way, but memorization can be more efficient in short term, especially if you expect to forget the stuff and never use it again after the exam. Some subjects probably are like this, but math famously is not. Which is why math is the most hated subject.
Another explanation was that the students probably never actually had an experience of understanding something, at least not in the school, so they literally don’t understand what I was trying to do. Which is a horrible idea, if true, but… that wouldn’t make it less true, right? Still makes me think: Didn’t those kids at least have an experience of something being explained by a book, or by a popular science movie? Probably most of them just don’t read such books or watch those movies. -- I wonder what would happen if I just showed the kids some TED videos; would they be interested, or would they hate it?
By the way, this seems not related to whether the topic is difficult. Even explaining how easy things work can be met by resistance. This time not because it is “too difficult”, but because “we should just skip the boring simple stuff”. (Of course, skipping the boring simple stuff is the best recipe to later find the more advanced stuff too difficult.) I wonder how much impact here has the internet-induced attention deficit.
Speaking as a student: I sympathize with Benito, have myself had his sort of frustration, and far prefer understanding to memorization… yet I must speak up for the side of the students in your experience. Why?
Because the incentives in the education system encourage memorization, and discourage understanding.
Say I’m in a class, learning some difficult topic. I know there will be a test, and the test will make up a big chunk of my grade (maybe all the tests together are most of my grade). I know the test will be such that passing it is easiest if I memorize — because that’s how tests are. What do I do?
True understanding in complex topics requires contemplation, experimentation, exploration; “playing around” with the material, trying things out for myself, taking time to think about it, going out and reading other things about the topic, discussing the topic with knowledgeable people. I’d love to do all of that...
… but I have three other classes, and they all expect me to read absurd amounts of material in time for next week’s lecture, and work on a major project apiece, and I have no time for any of those wonderful things I listed, and I have had four hours of sleep (and god forbid I have a job in addition to all of that) and I am in no state to deeply understand anything. Memorizing is faster and doesn’t require such expenditures of cognitive effort.
So what do I do? Do I try to understand, and not be able to understand enough, in time for the test on Monday, and thus fail the class? Or do I just memorize, and pass? And what good do your understanding-based teaching techniques do me, if you’re still going to give me tests and base my grade on them, and if the educational system is not going to allow me the conditions to make my own way to true understanding of the material?
None. No good at all.
Ah. I think this is why I’m finding physics and maths so difficult, even though my teachers said I’d find it easy. It’s not just that the teachers have no incentive to make me understand, it’s that because teachers aren’t trained to teach understanding, when I keep asking for it, they don’t know how to give it… This explains a lot of their behaviour.
Even when I’ve sat down one-on-one with a teacher and asked for the explanation of a piece of physics I totally haven’t understood, the guy just spoke at me for five/ten minutes, without stopping to ask me if I followed that step, or even just to repeat what he’d said, and then considered the matter settled at the end without questions about how I’d followed it. The problem with my understanding was at the beginning as well, and when he stopped, he finished as if delivering the end of a speech, as though it were final. It would’ve been a little awkward for me to ask him to re-explain the first bit… I thought he was a bad teacher, but he’s just never been incentivised to continually stop and check for understanding, after deriving the requisite equations.
And that’s why my maths teacher can never answer questions that go under the surface of what he teaches… I think he’d be perfectly able to understand it on the level to give me an explanation, as when I push him he does, but otherwise…
His catchphrase in our classroom is “In twenty years of questioning, nobody’s ever asked me that before.” He then reassures us that it’s okay for us to have asked it, as he assumes we think that having asked a new question is a bad thing...
Edit: Originally said something arrogant.
If you’re really curious, have you considered a private maths tutor? I wouldn’t go anywhere near the sort of people who help people cram for exams, but if there’s a local university you might find a maths student (even an undergrad would be fine) who’d actually enjoy talking about this sort of thing and might be really grateful for a few pounds an hour.
Hell, if you find someone who really likes the subject and can talk about it you may only have to buy them a coffee and you’ll have trouble getting them to shut up!
Thanks for the tip, and no, I hadn’t considered going out and looking for maths students. I mainly spend my time reading good textbooks (i.e. Art of Problem Solving). I had a maths tutor once, although I didn’t get out of it what I wanted.
Why do you think that?
Oops, I didn’t mean to sound quite so arrogant, and I merely meant in the top bit of the class. If you do want to know my actual reasons for thinking so, off the top of my head I’d mention teachers saying so generally, teachers saying so specifically, performance in maths competitions, a small year group such that I know everyone in the class fairly well and can see their abilities, observation of marks (grades) over the past six years, and I get paid to tutor maths to students in lower years.
Still, edited.
Word of advice: don’t put too much attention into your “potential”. That’s an unfalsifiable hypothesis that you can use to inflate your ego without actually, you know, being good. Look at your actual results, and only those.
I schlepped through a physics degree without understanding much of anything, and then turned to philosophy to solve the problem... the rest is ancient history.
From what I hear, philosophy is mostly ancient history.
It’s mostly mental masturbation where ancient history plays the role of porn...
writes down in list of things people have actually said to me
Kinda like this site. :-)
This site has different preferences in pr0n :-P
I had this experience in a context of high school, with no homework and no additional study at home.
None of the students’ classes assigned any homework?!
Some of them probably did, but most didn’t. The “no homework and no additional study at home” part was meant only for computer science, which I taught.
This is not usually true in the context of physics. I recently taught a physics course, the final was 3 questions, the time limit was 3 hours. Getting full credit on a single question was enough for an A. Memorization fails if you’ve never seen a question type before.
Not all tests are like that. I had plenty of tests in math that did require understanding to get a top mark. Memorization can get you enough points to pass the test but not all points.
It’s more useful than that, even.
There are also times where the problem isn’t necessarily memorization, but just a lapse of insight that makes it hard to realize that a problem as presented matches one of your pre-canned equations, even though it can be solved with one of them. Panic sets in, etc.
In situations like that, particularly in those years when you have calculus and various transforms in your toolkit (even if they aren’t strictly /expected/), you can solve the problem with those power tools instead, and having understood and being able to derive solutions to closely related problems from basic principles ought to be fairly predictive of you being able to generate a correct answer in those situations.
What do you think about these other possible explanations?
Some of these students really can’t learn to prove mathematical theorems. If exams required real understanding of math, then no matter how much these students and their teachers tried, with all the pedagogical techniques we know today, they would fail the exams.
These students really have very unpleasant subjective experiences when they try to understand math, a kind of mental suffering. They are bad at math because people are generally bad at doing very unpleasant things: they only do the absolute minimum they can get away with, so they don’t get enough practice to become better, and they also have trouble concentrating at practice because the experience is a bad one. Even if they can improve with practice, this would mean they’ll never practice enough to improve. (You may think that understanding something should be more fun than rote learning, and this may be true for some of them, but they never get to actually understand enough to realize this for themselves.)
The students are just time-discounting. They care more about not studying now than about passing the exam later. Or they are procrastinating, planning to study just before the exam. An effort to understand something takes more time in the short term than just memorizing it; it only pays off once you’ve understood enough things.
The students, as a social group, perceive themselves as opposed to and resisting the authority of teachers. They can’t usually resist mandatory things: attending classes, doing homework, having to pass exams; and they resent this. Whenever a teacher tries to introduce a study activity that isn’t mandatory (other teachers aren’t doing it), students will push back. Any students who speak up in class and say “actually I’m enjoying this extra material/alternative approach, please keep teaching it” would be betraying their peers. This is a matter of politics, and even if a teacher introduces non-mandatory or alternative techniques that are really objectively fun and efficient, students may not perceive them as such because they’re seeing them as “extra study” or “extra oppression”, not “a teacher trying to help us”.
It could be different explanations for different people. That said, options 1 and 2 seem to contradict my experience that students object even to explanations of relatively simple non-mathy things. My experience comes mostly from high school, where I taught everything during the lessons, with no homework and no home study; this seems to rule out option 3.
Option 4 seems plausible; I just feel it is not the full explanation. It’s more like collective cooperation against something that most students already dislike individually.
I’m closer to the typical mind than most people here with regard to math. I deeply loved humanities and thought of math and mathy fields as completely sterile and lifeless up until late high school, when I first realized that there was more to math than memorizing formulas. And then boom it became fun and also dramatically easier. Before that I didn’t reject the idea of learning using mathematical intuitions, I just had no idea that mathematical intuitions were a thing that could exist.
I suspect that most people learn school-things by rote simply because they don’t realize that school-things can be learned another way. This is evidenced by how people don’t choose to learn things they actually find interesting or useful by rote. There are quite a few people out there who think “book smarts” and “street smarts” are completely separate things and they just don’t have book smarts because they aren’t good at memorizing disjointed lists of facts.
This is hard to test. What we need here are studies that test different methods of teaching math on randomly selected people.
Of course people self-selecting to participate in the study would ruin it, and most people hate math after the experience and wouldn’t participate unless paid large sums.
On the other hand, a study of highschool students who are forced to participate also isn’t very useful because the fact of forcing students to study may well be the major reason why they find it a not fun experience and don’t study well.
If they get a few formulas and rules by rote, but can’t figure out when to apply them because they lack understanding, what does that actually get them?
It’s not a waste of time to give them a chance of getting something out of it, even if they’re almost certainly doomed in this regard.
I’m not saying it’s a bad thing in itself, but there’s usually not enough time in class to do it; it comes at the expense of the rote learning which these students need to pass the exams.
This is very much true, as I was one of those students myself. I did care about passing exams, not learning math.
I haven’t seen them mentioned in this thread, so thought I’d add them, since they’re probably valid and worth thinking about:
The utility of mathematical understanding, combined with the skills required for things such as mathematical proofs (or a deep understanding of physics), is low for most humans. Much lower than the utility of rote memorization of some simple mathematical and algebraic rules. Consider, especially, the level of education that most people will attain, and that the amount of exposure to abstract math and physics in that time is very small. Teaching such things in average classrooms may on average be both inefficient and unfair to the majority of students. You’re looking for knowledge and understanding in all the wrong places.
The vast majority of public education systems are, pragmatically speaking, tools purpose-built to produce model citizens, with intelligence and knowledge gains seen as beneficial but not necessary side effects. That is, as long as the kids are off the streets, the system is doing its job; if they also go on to get good jobs, that’s a bonus. You’re using the wrong tool for the job (either use better tools, or misuse the tools you have until they do the job you actually want done).
I’ve noticed that one of the biggest things holding me back in math/physics is an aversion to thinking too hard or too long about math and physics problems. It seems to me that if I were able to overcome this aversion, and math were as fun as playing video games, I’d be a lot better at it.
You have to want to be a wizard.
Plenty of us took the Wizard’s Oath as kids and still have a hard time in math classes sometimes.
I think everyone has trouble in math class, eventually.
From here. Or as I just think of it, if you don’t at least have a hard time sometimes, if not fail sometimes, you’re not shooting high enough.
If I don’t get a game over at least once, the game is too easy.
Is that an Umeshism?
Almost, but not quite. “If you never get a game over, you’re playing games that are too easy” would indeed be an Umeshism, but this is a complaint about easy games rather than a suggestion that I should be playing harder ones.
Not in my experience, unless you’re talking about trouble teaching them. It’s very possible to run out of classes before you hit anything truly difficult (in my country there are no more classes after Masters level; a PhD student is expected to be doing research—the American notion of “all but dissertation” provokes endless amusement, here you’re “all but dissertation” from day 1).
A system where a non-genius math student never faces a challenging math class would probably “provoke endless amusement” from an American grad student, since to them it means that the program is too weak to be considered serious.
If you literally never had trouble in math class, you are a rare mind of the Newton/Gauss calibre, and you should go get your Fields Medal before you are 40 :).
I had trouble in my Masters (a combination of course choice and bad luck) and so didn’t do a PhD. But we’re talking about the top university in at least the country, and by some accounts the hardest non-research course in the world. I’m pretty sure that going a different route I could’ve got to the point of starting a PhD before hitting anything difficult.
I do sometimes think I should’ve chased the Fields medal, but I’m ultimately happier the way things turned out. I worked my ass off the whole time in school/university; nowadays I earn a good living doing fun things, but my evenings and weekends are my own, and I’ve got a much better social life.
Huh. Yes, I guess that in retrospect I wouldn’t be the only one.
This is your secret?
You have to want to learn how to be a wizard.
You have to like to learn how to be a wizard.
Good video games are designed to be fun, that is their purpose. Math, um, not so much.
And at least some math instructors effectively teach that if you aren’t already finding (their presentations of) math fascinating, that you must just not be a Math Person.
Math is a bit like lifting weights. Sitting in front of a heavy mathematical problem is challenging. The job of a good teacher isn’t to remove the challenge. Math is about abstract thinking, and a teacher who tries to spare his students from doing abstract thinking isn’t doing it right.
Deliberate practice is mentally taxing.
The difficult thing as a teacher is to motivate the student to face the challenge whether the challenge is lifting weights or doing complicated math.
The job of a good teacher is to find a slightly less challenging problem, and to give you that problem first. Ideally, to find a sequence of problems very smoothly increasing in difficulty.
Just like a computer game doesn’t start with the boss fight, although some determined players would win that, too.
No. Being good at math is about being able to keep your attention on a complicated proof even if it’s very challenging and your head seems like it’s going to burst.
If you want to build muscles you don’t slowly increase the amount of weight and keep it at a level where it’s effortless. You train to exhaustion of given muscles.
Building mental stamina to tackle very complicated abstract problems that aren’t solvable in five minutes is part of a good math education.
Deliberate practice is supposed to feel hard. A computer game is supposed to feel fun. You can play a computer game for 12 hours; a few hours of deliberate practice, on the other hand, are usually enough to bring someone to the brink of exhaustion.
If you only face problems in your education that are smooth like a computer game, you aren’t well prepared for facing hard problems in reality. A good math education teaches you the mindset that’s required to stick with a tough abstract problem and tackle it head on even if you can’t fully grasp it after looking at it for 30 minutes.
You might not use calculus at your job, but if your math education teaches you the ability to stay focused on hard abstract problems, then it has fulfilled its purpose.
You can teach calculus by giving the student concrete real-world examples, but that defeats the point of the exercise. If we are honest, most students won’t need calculus at their jobs; practical application is not the point of math education. At least that was the mindset in which I was taught math at school in Germany.
You don’t put on so much weight that you couldn’t possibly lift it, either (nor so much weight that you could only lift it with atrocious form and risk of injury, the analogue of which would be memorising a proof as though it were a prayer in a dead language, with only a faulty understanding of what the words mean).
Yes, memorizing proofs isn’t the point. You want to derive proofs. I think it’s perfectly fine to sit for an hour in front of a complicated proof and not be able to complete it.
A ten-year-old might not have that mental stamina, but a good math education should teach it, so it’s there by the end of school.
This kind of philosophy sounds like it’s going to make a few people very good at tackling hard problems, while causing everyone else to become demotivated and hate math.
Motivation has a lot to do with knowing why you are engaging in an action. If you think things should be easy and they aren’t you get demotivated. If you expect difficulty and manage to face it then that doesn’t destroy motivation.
I don’t think getting the philosophy right is easy. One thing my school teachers got very wrong was believing in talent instead of in a growth mindset.
I did identify myself as smart so I didn’t learn the value of putting in time to practice. I tried to get by with the minimum of effort.
I think Cal Newport has written a lot of interesting things about what a good philosophy of learning would look like.
There’s a certain education philosophy where you have standardized tests, and then you do gamified education to get children to score well on those tests. Students have pens with multiple colors and are encouraged to draw mind maps. Afterwards the students go off to follow their passions and live the American dream. It ticks all the boxes of ideas that come out of California.
I’m not really opposed to someone building a gamified system to teach calculus, but at the same time it’s important to understand the trade-offs. We don’t want to end up with a system where the attention span of the students who come out of it is limited to playing games.
I think that the way good games teach things is basically being engaging by constantly presenting content that’s in the learner’s zone of proximal development, offering any guidance needed for mastering that, and then gradually increasing the level of difficulty so as to constantly keep things in the ZPD. The player is kept constantly challenged and working at the edge of their ability, but because the challenge never becomes too high, the challenge also remains motivating all the time, with the end result being continual improvement.
For example, in a game where your character may eventually have access to 50 different powers, throwing them at the player all at once would be overwhelming when the player’s still learning to master the basic controls. So instead the first level just involves mastering the basic controls and you have just a single power that you need to use in order to beat the level, then when you’ve indicated that you’ve learned that (by beating the level), you get access to more powers, and so on. When they reach the final level, they’re also likely to be confident about their abilities even when it becomes difficult, because they know that they’ve tackled these kinds of problems plenty of times before and have always eventually been successful in the past, even if it required several tries.
The “math education is all about teaching people how to stay focused on hard abstract problems” philosophy sounds to me like the equivalent of throwing people at a level where they had to combine all 50 powers in order to survive, right from the very beginning. If you intend on becoming a research mathematician who has to tackle previously unencountered problems that nobody has any clue of how to solve, it may be a good way of preparing you for it. But forcing a student to confront needlessly difficult problems, when you could instead offer a smoothly increasing difficulty, doesn’t seem like a very good way to learn in general.
When our university began taking the principles of something like cognitive apprenticeship—which basically does exactly the thing that Viliam Bur mentioned, presenting problems in smoothly increasing difficulty as well as offering extensive coaching and assistance—and applying them to math (more papers), the end result was high student satisfaction even while the workload was significantly increased and the problems were made more challenging.
Not only research mathematicians but basically anyone who’s supposed to research previously unencountered problems. That’s the ability that universities are traditionally supposed to teach.
If that’s not what you want to teach, why teach calculus in the first place? If I need an integral I can ask a computer to calculate the integral for me. Why teach someone who wants to be a software engineer calculus?
There’s a certain idea of egalitarianism according to which everyone should have a university education. But that isn’t why we have universities. We have universities to teach people to tackle previously unencountered problems.
If you want to be a carpenter, you don’t go to university; you apprentice with an existing carpenter. Universities are not structured to be good at teaching trades like carpentry.
Isn’t that rather “problems that can’t be solved using currently existing mathematics”? If it’s just a previously unencountered problem, but can be solved using the tools from an existing branch of math, then what you actually need is experience from working with those tools so that you can recognize it as a problem that can be tackled with those tools. As well as having had plenty of instruction in actually breaking down big problems into smaller pieces.
And even those research mathematicians will primarily need a good and thorough understanding of the more basic mathematics that they’re building on. The ability to tackle complex unencountered problems that you have no idea of how to solve is definitely important, but I would still prioritize giving them a maximally strong understanding of the existing mathematics first.
But I wasn’t thinking that much in the context of university education, more in the context of primary/secondary school. Math offers plenty of general-purpose contexts that may greatly enhance one’s ability to think in precise terms: to the extent that we can make the whole general population learn and enjoy those concepts, it might help raise the sanity waterline.
I agree that calculus probably isn’t very useful for that purpose, though. A thorough understanding of basic statistics and probability would seem much more important.
There’s an interesting paper about how doing science is basically about coping with feeling stupid.
No matter whether you do research in math or whether you do research in biology, you have to come to terms with tackling problems that aren’t easily solved.
One of the huge problems with Reddit-style New Atheists is that they don’t like to feel stupid. They want their science education in easily digestible form.
I agree, that’s an important skill and probably undertaught.
Nobody understands all math. For practical purposes it’s often more important to know which mathematical tools exist and having an ability to learn to use those tools.
I don’t need to be able to solve integrals. It’s enough to know that integrals exist and that Wolfram Alpha will solve them for me.
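In the same spirit, when no closed form is needed, a few lines of code stand in for the hand calculation entirely. A minimal sketch (the midpoint rule and the example integrand are my own choices, not anything from the thread):

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration: good enough when you
    just need the number rather than a symbolic answer."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The integral of x*exp(-x^2) from 0 to 1 has the closed form (1 - 1/e)/2;
# the machine gets the same number without me remembering the substitution.
approx = integrate(lambda x: x * math.exp(-x * x), 0.0, 1.0)
```

The point survives the change of tools: knowing that the problem is an integral, and that a machine can evaluate it, does most of the work.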
I’m not saying that one shouldn’t spend any time on easy exercises. Spending a third of the time on problems that are really hard might be a ratio that’s okay.
Statistics is important, but it’s not clear that mathematical statistics classes help. Students who take them often come away thinking that real-world problems follow a normal distribution.
Calculus isn’t as important to software engineering as some other branches of math, but it can still be handy to know. I’ve mostly encountered it in the context of physical simulation: optics stuff for graphics rendering, simplified Navier-Stokes for weather simulation, and orbital mechanics, to name three. Sometimes you can look up the exact equation you need, but copying out of the back of a textbook won’t equip you to handle special cases, or to optimize your code if the general solution is too computationally expensive.
Even that is sort of missing the point, though. The reason a lot of math classes are in a traditional CS curriculum isn’t because the exact skills they teach will come up in industry; it’s because they develop abstract thinking skills in a way that classes on more technical aspects of software engineering don’t. And a well-developed sense of abstraction is very important in software, at least once you get beyond the most basic codemonkey tasks.
To that extent, the CS curriculum shouldn’t be evaluated by how well people do calculus but by how well it teaches abstract thinking. I do think that the kind of abstract thinking where you don’t know how to tackle a problem, because the problem is new, is valuable to software developers.
This is a very strong set of assertions which I find deeply counterintuitive. Of course that doesn’t mean it isn’t true. Do you have any evidence for any of it?
Which ones do you find counterintuitive? It’s a mix of a few very modern ideas and more traditional ideas about education, while staying away from the no-child-left-behind philosophy of education.
I can make any of the points in more depth, but the post was already long, and I’m sort of afraid that people don’t read my posts on LW if they get too long ;) Which ones do you find particularly interesting?
Of course bad instructors can say this as easily as good ones.
But isn’t it true to say that if you have reasonably wide experience with different presentations of math, and you don’t find any of them fascinating, then you’re probably not a Math Person? Or do Math People not exist as a natural category?
I’d be ever so interested in the answer to this question. It seems really obvious that some people are good at maths and some people aren’t.
But it’s also really obvious that some people like sprouts. And it turns out as far as I’m aware that it’s possible to like sprouts for both genetic and environmental reasons. I’d love to know the causes of mathematical ability. Especially since it seems to be possible to be both ‘clever’ and ‘bad at maths’. Does anyone know what the latest thinking on it is?
My recent experiences trying to design IQ tests tell me that that’s both innate and very trainable. In fact I’d now trust the sort of test that asks you how to spell or define randomly chosen words much more than the Raven’s type tests. It’s really hard to fake good speling, whereas the pattern tests are probably just telling you whether you once spent half an hour looking closely at the wallpaper. Which is exactly the reverse of the belief that I started with.
Related: some people believe that programming talent is very innate and people can be sharply separated into those who can and cannot learn to write code. Previously on LW here, and I think there was an earlier more substantive post but I can’t find it now. See also this. Gwern collected some further evidence and counterevidence.
It was probably mentioned in the earlier discussions, but I believe the “two humps” pattern can easily be explained by bad teaching. If it happens across the whole profession, maybe no one has yet discovered a good way to teach it, because most of the people who understand the topic were autodidacts.
As a model, imagine that programming ability is a number. You come to school with some value between 0 and 10. A teacher can give you a +20 bonus. Problem is, the teacher cannot explain the most simple stuff which you need to get to level 5; maybe because it is so obvious to the teacher that they can’t understand how specifically someone else would not already understand it. So the kids with starting values between 0 and 4 can’t follow the lessons and don’t learn anything, while the kids with starting values 5 to 10 get the +20 bonus. At the end, you get the “two humps”: one group with values 0 to 4, another group with values 25 to 30. -- And the worst part is that this belief creates a spiral, because once everyone has observed the “two humps” in adults, then if some student with a starting value of 4 does not understand the lesson, we don’t feel a need to fix this; obviously they were just not meant to understand programming.
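The toy model is small enough to write down directly. A sketch, using the hypothetical numbers from the comment (the threshold of 5 and the +20 bonus are the model’s assumptions, not data):

```python
def final_ability(start):
    """Toy model: students starting at 5 or above can follow the
    lessons and collect the +20 bonus; the rest learn nothing."""
    return start + 20 if start >= 5 else start

# A uniform spread of starting abilities 0..10 comes out bimodal:
# nothing lands in the gap between 5 and 24.
results = [final_ability(s) for s in range(11)]
low_hump = [r for r in results if r <= 4]     # stuck at 0..4
high_hump = [r for r in results if r >= 25]   # boosted to 25..30
```

The bimodal output requires no bimodal input; the gap is manufactured entirely by the all-or-nothing teaching step.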
What are those starting concepts that some people get and some people don’t? Probably things like “the computer is just a mechanical thing which follows some mechanical rules; it has no mind, and it doesn’t really understand anything”, but you need to feel it at a gut level. (Maybe aspies have a natural advantage here, because they don’t expect the computer to have a mind.) It could probably help to play with some simple mechanical machines first, where the kids could observe the moving parts. In other words, maybe we don’t only need specialized educational software, but also hardware. A computer in the form of a black box is already too big a piece of magic, prone to be anthropomorphized. You should probably start with a mechanical typewriter and a mechanical calculator.
A lot of effort has gone into trying to invent ways of teaching programming to complete newbies. If really no-one has succeeded at all, then maybe it’s time to seriously consider that some people can’t be taught.
A claim that someone cannot be taught by any possible intervention would be a very strong claim indeed, and almost certainly false. But a claim that no-one knows how to teach this even though a lot of people have tried and failed for a long time now, makes predictions pretty similar to the theory that some people simply can’t be taught.
This model matches the known facts, but it doesn’t tell us what we really want to know. What determines what value people start out with? Does everyone start out with 0 and some people increase their value in unknown, perhaps spontaneous ways? Or are some people just born with high values and they’ll arrive at 5 or 10 no matter what they do, while others will stay at 0 no matter what?
I don’t know if educators have tried teaching the concepts you suggest explicitly.
http://www.eis.mdx.ac.uk/research/PhDArea/saeed/
The researcher didn’t distinguish the conjectured cause (bimodal differences in students’ ability to form models of computation) from other possible causes (just to name one — some students are more confident, and computing classes reward confidence).
And the researcher’s advisor later described his enthusiasm for the study as “prescription-drug induced over-hyping” of the results …
Clearly further research is needed. It should probably not assume that programmers are magic special people, no matter how appealing that notion is to many programmers.
Once upon a time, it would have been a radical proposition to suggest that even 25% of the population might one day be able to read and write. Reading and writing were the province of magic special people like scribes and priests. Today, we count on almost every adult being able to read traffic signs, recipes, bills, emails, and so on — even the ones who do not do “serious reading”.
A problem with programming education is that it is frequently unclear what the point of it is. Is it to identify those students who can learn to get jobs as programmers in industry or research? Is it to improve students’ ability to control the technology that is a greater and greater part of their world? Is it to teach the mathematical concepts of elementary computer science?
We know why we teach kids to read. The wonders of literature aside, we know full well that they cannot get on as competent adults unless they are literate. Literacy was not a necessity for most people two thousand years ago; it is a necessity for most people today. Will programming ever become that sort of necessity?
That was the thinking at the dawn of personal computing, back in the 80s.
Turns out the answer is “no”.
“Not yet.”
You think the general population of the future will be hacking code into text editors? That isn’t even ubiquitous in the industry, since you can call yourself a developer if you only know how to use graphical tools. They’ll be doing something, but it will be analogous to electronic music production as opposed to playing an instrument.
Computing hasn’t even existed for a century yet. Give it time.
There will come a day when ordinary educated folks quicksort their playing cards when they want to put them in order. :)
I insertion sort. :P
Doesn’t almost everyone? I’ve always heard that as the inspiration for insertion sorting.
No way, I pigeonhole sort.
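For what it’s worth, the two card-sorting strategies in this exchange look like this in code. A sketch, representing cards simply as rank numbers (my simplification):

```python
def insertion_sort(cards):
    """Sort the way most people sort a hand of cards: pick up each
    new card and slide it left until it sits in the right place."""
    hand = []
    for card in cards:
        i = len(hand)
        while i > 0 and hand[i - 1] > card:
            i -= 1
        hand.insert(i, card)
    return hand

def pigeonhole_sort(cards, max_rank=13):
    """Drop each card straight into the slot for its rank, then read
    the slots off in order; works because ranks are few and known."""
    slots = [0] * (max_rank + 1)
    for card in cards:
        slots[card] += 1
    return [rank for rank in range(max_rank + 1) for _ in range(slots[rank])]
```

Pigeonhole sorting is the faster strategy precisely because playing cards have a tiny, fixed universe of ranks; insertion sort is the one hands discover naturally.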
My bet would be on childhood experience. For example the kinds of toys used. I would predict a positive effect of various construction sets. It’s like “Reductionism for Kindergarten”. :D
The silent pre-programming knowledge could be things like: “this toy is interacted with by placing its pieces and observing what they do (or modelling in one’s mind what they would do), instead of e.g. talking to the toy and pretending the toy understands”.
An anecdatum. The only construction set I had as a boy was lego, and my little sister played with it too. As far as I know, there was no feeling that it was my toy only. We’re five years apart so all my stuff got passed down or shared.
My sister’s very clever. We both did degrees in the same place, mine maths and hers archaeology.
She’s never shown the slightest interest in programming or maths, whereas I remember the thunderbolt-strike of seeing my first computer program at ten years old, long before I’d ever actually seen a computer. I nagged my parents obsessively for one until they gave in, and maths and programming have been my hobby and my profession ever since.
I distinctly remember trying to show Liz how to use my computer, and she just wasn’t interested.
My parents are entirely non-mathematical. They’re both educated people, but artsy. Mum must have some natural talent, because she showed me how to do fractions before I went to school, but I think she dropped maths at sixteen. I think it’s fair to say that Dad hates and fears it. Neither of them knew the first thing about computers when I was little. They just weren’t a thing that people had in the 70s, any more than hovercraft were.
Every attempt my school made to teach programming was utterly pointless for me, I either already knew what they were trying to teach or got it in a few seconds.
The only attempts to teach programming that have ever held my attention or shown me anything interesting are SICP, and the algorithms and automata courses on Coursera, all of which I passed with near-perfect scores, and did for fun.
So from personal experience I believe in ‘natural talent’ in programming. And I don’t believe it’s got anything to do with upbringing, except that our house was quiet and educated.
You’d have had to work quite hard to stop me becoming a programmer. And I don’t think anything in my background was in favour of me becoming one. And anything that was should have favoured my sister too.
And another anecdote:
I’ve got two friends who are talented maths graduates, and somehow both of them had managed to get through their first degrees without ever writing programs. Both of them asked me to teach them.
The first one I’ve made several attempts with. He sort-of gets it, but he doesn’t see why you’d want to. A couple of times he’s said ‘Oh yes, I get it, sort of like experimental mathematics’. But any time he gets a problem about numbers he tries to solve it with pen and paper, even when it looks obvious to me that a computer will be a profitable attack.
The second, I spent about two hours showing him how to get to “hello world” in python and how to fetch a web page. Five days later he shows me a program he’s written to screen-scrape betfair and place trades automatically when it spots arbitrage opportunities. I was literally speechless.
So I reckon that whatever-makes-you-a-mathematician and whatever-makes-you-a-programmer might be different things too. Which is actually a bit weird. They feel the same to me.
That seems like rather a strong claim. Everyone who can program now was a complete newbie at some point. Presumably they did not learn by a bolt of divine inspiration out of the blue sky.
The sources linked above claim that some can be taught, and some (probably most of the population) can’t, no matter what you do. And of those who can learn, many become autodidacts in a suitable environment.
Of course they don’t reinvent programming themselves, they do learn it from others, but the same could be said of any skill or knowledge. And yet there are skills which clearly have very strong inborn dispositions. It’s being claimed that programming is such a skill, and an extreme one at that, with a sharply bimodal distribution.
Bad teaching? There’s an even simpler explanation (at least regarding programming): autodidacts with previous experience versus regular students without previous experience. The fact that the teaching is often geared towards the students with previous experience and suffers from a major tone of “Why don’t you know this already?” throughout the first year or two of undergrad doesn’t help a bit.
“I can teach you this only if you already know it” seems like bad teaching to me. Not sure if we are not just debating definitions here.
I don’t think we’re even debating.
Yes, that is the definition of bad teaching. My assertion is that CS departments have gotten so damn complacent about receiving a steady stream of autodidact programmers as their undergrad entrants that they’ve stopped bothering with actually teaching low-level courses. They assign work, they expect to receive finished work, they grade the finished work, but it all relies on the clandestine assumption that the “good students” could already do the work when they entered the classroom.
Exactly.
Only a small fraction of math has practical applications, the majority of math exists for no reason other than thinking about it is fun. Even things with applications had sometimes been invented before those applications were known. So in a sense most math is designed to be fun. Of course it’s not fun for everyone, just for a special class of people who are into this kind of thing. That makes it different from Angry Birds. But there are many games which are also only enjoyed by a specific audience, so maybe the difference is not that fundamental. A large part of the reason the average person doesn’t enjoy math is that unlike Angry Birds math requires some effort, which is the same reason the average person doesn’t enjoy League Of Evil III.
Spot on. Pure, fun math does benefit society directly in at least one way, however, in that the opportunity to engage in it can be used to lure very smart people into otherwise unpalatable teaching jobs.
In fact, that seems to be the main point of “research” in most less-than-productive fields (i.e. the humanities).
Is it clear that this is in the best interests of society? It would seem to me the end result is bad teaching. Back when I was in undergrad, the best researchers were the worst teachers (for obvious reasons: they were focused on their research and didn’t care at all about teaching).
When I was in grad school in physics, the professor widely considered the strongest teacher was denied tenure (cited AGAINST him in the decision was the fact that he had written a widely used textbook), etc.
Also, the desire of tenure-track profs to dodge teaching is why the majority of math classes at many research institutions were taught by grad students.
Interesting. Did there seem to be any pedagogical benefit to having relatively easy access to research-level experts, though?
In graduate school, for special topics classes there were usually only 1 or 2 professors who COULD teach a given class (and only 3 or 4 students interested in taking it), so when you are talking cutting-edge research topics, it’s a necessity to have a researcher, because no one else will be familiar enough with what’s going on in the field.
Outside of that, not really. Good teaching takes work, so if you put someone in front of the class whose career advancement requires spending all their time on research, then the teaching is just a potentially career-destroying distraction. Also, at the intro level, subject-pedagogy experts tend to do better (e.g. the physics education group was measurably more effective at teaching physics than the other physics groups, so much so that I think they now exclusively teach the large physics courses for engineers).
I mean, it’s easier to get research positions with those professors, and those are learning experiences, but the students generally get very little out of it during the actual class.
Thinking for a long time is one of the classic descriptions of Newton; from John Maynard Keynes’s “Newton, the Man”:
Paul Graham also mentions focus in this article.
I think math is more fun than playing video games. But I guess it’s subjective.
Lucky you.
He brags shamelessly about his wide variety of interests: Drumming, lockpicking, PUA, biology, Tana Tuva, etc.
The Feynman divorce:
You’re right.
Indeed, terse “explanations” that handwave more than explain are a pet peeve of mine. They can be outright confusing and cause more harm than good IMO. See this question on phrasing explanations in physics for some examples.
-- many different people, most recently user chipaca on HN
Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you’re ignoring, or a feeling that your opponents are acting the way they are because they’re stupid or evil? Or wanting to censor someone else’s speech? I take all these things as alarm signals.
A communist friend of mine once said, after I’d nailed her into a corner in a political argument about appropriate rates of pay during a firemen’s strike, “Well, under socialism there wouldn’t be as many fires.” I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism.
The feelings you list correspond to very common ways people behave. So they’re very weak evidence that you’re wrong about something. Unless you’re a trained rationalist who very rarely has these feelings / behaviors.
Most people first acquire a belief (whether by epistemologically legitimate means or not) and then proceed to defend it, ignore contrary evidence, and feel opponents to be stupid, because that’s just the way most people deal with beliefs that are important to them.
This is the most forceful version I’ve seen (assumed it had been posted before, discovered it probably hasn’t, won’t start a new thread since it’s too similar):
Kathryn Schulz, Being Wrong
But I’m not comfortable endorsing either of these quotes without a comment.
chipaca’s quote (and friends) suggest to me that
my “being wrong” and “being right” are complementary hypotheses, and
my subjective feelings are not evidence either way.
Schulz’s quote (and book) suggest to me that
my “being wrong” is broadly and overwhelmingly true (my map is not the territory), and
my subjective feeling of being right is in fact evidence that I am very wrong.
I’d prefer to emphasize that “You are already in trouble when you feel like you’re still on solid ground,” or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn’t been quoted here before, but you might’ve seen my use of that quote on http://www.gwern.net/Mistakes, to which I will add a quote of Wittgenstein making the same point much more compactly and concisely:
It occurs to me that “being wrong” can be divided into two subcategories—before and after you start seeing evidence or arguments which undermine your position.
With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable and it is always tempting to look for ways to lessen it.
from The Last Samurai by Helen DeWitt
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking, Chapter 18 “The Intentional Stance” [Bold is original]
Reminded me of the idea of ‘hacking away at the edges’.
As far as I understand, he actually does define his terms. Dennett defines a mind as a rational agent/decision algorithm (subject to evolutionary baggage and bugs in the algorithm). Please correct me if I’m wrong.
At this point in the book, he certainly hasn’t reached that conclusion. He’s merely given parameters under which taking the Intentional Stance is a good idea; when it’s useful to treat something as having a mind, beliefs, desires, etc. This, he says, will be a useful stepping stone to figuring out what minds and beliefs and desires really are, and how to know where they exist in this world.
-- Henry Hazlitt, Economics in One Lesson
And it seems to be going pretty well!
Ah, but you have not seen the counterfactual.
Terry Pratchett, A Hat Full of Sky
This is a great tagline for the doctrine of Original Sin.
“Even if it’s not your fault, it’s your punishment.”
– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”
The original quotation on LW.
I’m not sure that’s quite in the spirit of the thread rules, what with how closely tied Slate Star Codex is to the LW community. But it’s a good enough abuse of Solzhenitsyn that I’m upvoting it anyway.
Am I the only one who finds it annoying how the “do not quote LW rule” has been creeping into ever broader interpretations?
Hmm. It’s an interesting point.
I’m not entirely clear on the purpose of the rule. It makes sense to not just increase the redundancy of anything people have said in other threads that have already got a lot of attention, but I’m sure there’s plenty of interesting stuff buried deep in comment threads that haven’t got much light and might be worth sharing. Conversely, there will be some quotes here from outside LW/OB that a high proportion of readers have seen already.
So it’s definitely something that made sense when the LW/OB community was smaller and there wasn’t much good stuff that people weren’t seeing anyway, but perhaps it’s time to relax the rule a little bit, replace it with the substance.
I believe the purpose was to bring material to LW from outside rather than quoting each other (and especially, quoting Eliezer), to avoid an echo chamber effect. There was once an experimental LW Quotes Thread, but the experiment has not been repeated.
I don’t have a strong view about whether LW regulars posting on other LW regulars’ blogs should be excluded from the quotes threads, but I incline against the practice. It was a good quote though.
Which side do you incline against?
Against having such quotes.
I can’t comment on the size (so LW is growing?), but I have a tingling memory that long time ago (several years back) people did post LW quotes. Since LW doesn’t exist that long I suppose it was the case in its inception. I can’t say for sure, but actually Eugine’s post seems to suggest that as well; otherwise it wouldn’t have been “creeping into”. Either way, should be easy to check. I do, too, think it is worthwhile to put LW quotes. I remember (I do!) reading those and being led to read the original articles whence they came.
I don’t think LW/OB quotes were ever allowed, but MoR quotes used to be.
I think we may have cracked down on Hanson quotes too.
There was a separate thread for that for a while.
Jessica speaking to Thufir Hawat in Frank Herbert’s Dune
~J. Stanton, “The Paleo Identity Crisis: What Is The Paleo Diet, Anyway?”
But the answers might be specific to each individual because the biochemistry of humans is not exactly the same.
In that case, the questions have complicated answers. The best dieting advice might be “first sequence your personal microbiome then consult this lookup table...”
The important thing is not that the answers are complicated, but that the answers are different for different people. “Consult a lookup table” is not an answer, it’s advice how to get to one.
Individuals being different from each other shouldn’t necessarily diminish the significance of biochemistry. Biochemistry should explain not just our similarities but overarching principles that organize and explain the differences.
My point wasn’t that biochemistry is not important. My point was that the answers you get from biochemistry might be complicated and limited in application.
It’s not at all clear that someone who knows all the biochemistry will outperform someone who’s good at feeling what goes on in his body.
In the absence of good measurement instruments feelings allow you to respond to specific situations much better than theoretical understanding.
I am told that the natural feeling for gravity and balance is worse than useless to a pilot.
I am told this as well.
See also http://lesswrong.com/lw/1hh/rationality_quotes_november_2009/1ah9
It probably depends a lot on what “natural feeling” means to you. The way a Westerner who’s educated to sit on chairs for long periods of time in school deals with feeling gravity is far from natural.
If you interact with gravity all wrong and are suddenly put into a complicated situation while flying a plane as a pilot, then that will reveal a lot of problems.
It would be interesting to compare different beginner pilots and the way they interact with gravity and see which ones do a better job.
I think it’s because humans can easily confuse the pull caused by gravity with a pull caused by acceleration. We could tell the difference using the visual cues when we run or jump… but if you are sitting in the plane, the plane moves together with you, so all objects inside the plane are constantly giving you the wrong cues.
In other words, the plane is “far from natural”. Not being from West doesn’t help a lot here. Perhaps coming from an alien civilization where kids spend most of their time in flying saucers would make a difference.
It’s not just humans—the basic idea of general relativity is that (short of looking out or detecting tidal effects) it’s impossible in principle to tell them apart.
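The point can be put in one line of standard textbook physics (this equation is my gloss, not from the thread): the inner ear, like any accelerometer, measures only the specific force, the difference between the frame’s acceleration and gravity.

```latex
% What the inner ear (or any accelerometer) actually measures is the
% specific force:
\vec{f} = \vec{a} - \vec{g}
% At rest (\vec{a} = 0) this reads 1 g upward; in free fall
% (\vec{a} = \vec{g}) it reads zero. Only the combination a - g is
% observable, so without external cues a banked turn can feel exactly
% like level flight -- the local statement of the equivalence principle.
```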
Running a lot is something quite natural for humans but not done much by Westerners. That means that the default mode of feeling pull is not well developed. Because it’s unclear what you are actually feeling, it’s more difficult to adapt to the changing meaning of the feeling.
I didn’t see much running in Japan nor China either. :)
Saying that A is true for every element of X doesn’t mean that A is false for every element that isn’t in X. Logic doesn’t work that way.
My sentence doesn’t make statements about the amount of running being done in Japan or China.
Depending on the outcome specified and the type of feelings attended to, of course.
Yes, being able to tell apart the feeling that makes you crave sugar from the feeling that tells you that you should eat some flesh to fix your B12 deficiency isn’t easy.
Getting clear about the outcome that you want to achieve with your eating choices is also not straightforward.
Both are skills for which understanding biochemistry is secondary.
As far as I can tell, distinguishing between those sorts of feeling is a matter of accumulated experience. There aren’t classes of feelings, some of which are desires for things which are bad for you and others which are desires for what you need.
I’m not 100% sure, because I’m not that good at making eating choices, but there are people who make intuitive eating choices, who wouldn’t eat sugared food, who eat mostly raw vegan, and who eat their raw steak once a month to stock up on B12 when their body calls for it (or whatever the body actually calls for when it brings up the desire to eat flesh).
With cognitive thinking, there is far- and near-thinking. I think that exists for feelings as well. Fun would be a word that generally describes a near-feeling, while life satisfaction refers to more of a far-feeling.
A meditation is finished when you feel it’s finished. If you don’t have that feeling, which can take years to develop, you need a clock to tell you when 15 minutes are over, because otherwise you might use it as an excuse to quit the meditation when things become really hard.
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hans Moravec, Wikipedia/Moravec’s Paradox
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived… As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Stephen Pinker, Wikipedia/Moravec’s Paradox
What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist?
Pinker understood AI correctly (it’s still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software.
If you were to say “a machine can’t do everything a horse can do”, you’d be right, even today, but that isn’t a refutation of the effect of automation on the economic prospects of equine labor.
Except that in exponentially-increasing computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has, isn’t long.
Let’s hope that we’re not still paying rent then, or we might find ourselves homeless.
G. K. Chesterton, attributed.
Upvoted. I would’ve preferred the following version:
Might someone offer an explanation of this to me?
On its own I can think of several things that these words might be uttered in order to express. A little search turns up a more extended form, with a claimed source:
Said to be by G.K. Chesterton in the New York Times Magazine of February 11, 1923, which appears to be a real thing, but one which is not online. According to this version, he is jibing at progressivism, the adulation of the latest thing because it is newer than yesterday’s latest thing.
ETA: Chesterton uses the same analogy, in rather more words, here.
Note that this accentuates the relevance of a detail that might be skipped over in the original quote: that Thursday comes after Wednesday. That is, this may be intended as a dismissal of the ‘all change is progress’ position or the ‘traditions are bad because they are traditions’ position.
Not to mention the people who think accusing their opponents of being “on the wrong side of history” constitutes an argument.
So you are not going to argue that history has shown that socialism has failed?
That’s using history as evidence. What I was complaining about is closer to the people who declare that all opponents of a change that they plan to implement (or at best implemented only several decades ago) are “on the wrong side of history”.
I think you may not be interpreting the phrase “the wrong side of history” as people who say it mean it.
There’s a classic saying: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” -- Max Planck
Effectively there’s a position that’s obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can’t be made until they die off. But progress will be made because the position is correct. When you tell someone they are on the wrong side of history you are reminding them that they are behaving like one of the old men that Planck mentions. Put another way, what it’s saying is “if you look at people who don’t come from the past and don’t have large status quo bias, you will notice a trend”.
In physics, yes. In history / political science, no.
“Slavery is wrong” isn’t obviously correct?
I find this comment particularly ironic given your chosen username.
“War is wrong” isn’t obviously correct?
I think the majority of the population believes that there are valid reasons to start a war. R2P etc.
I was talking about war, not wars. Everybody would wish away war if they could. Many people think THIS war needs to be fought.
I wouldn’t wish away war unless I also wished away the things we need to go to war for, in which case you could as easily say that I would wish away cancer treatments or firefighters.
People go to war because of war, because they have been attacked. That would get wished away as part of the deal.
Or they go to war for less honorable reasons, like grabbing resources or making forcible converts to a religion.
Can’t see anything I’d want to keep.
Or because they are being mistreated by others in ways that don’t qualify as war.
Inter-state war is by far the least common type of warfare in the modern era, although the proxy wars growing out of the Cold War muddy the waters some. Civil and ethnic warfare is much more common, and I don’t think we can say that civil conflicts, at least, can always be described in terms of straightforward aggression and defense against aggression.
(Truthfully I wouldn’t say that for inter-state wars either, not all of them, but they’re a lot easier to spin that way.)
I was using wish away to mean magically get rid of. Unmagically getting rid of it requires unmagically fixing a lot of other things, which is why it hasn’t happened.
Magically getting rid of it strikes me as one of those wishes that will backfire horribly in one of several ways, depending on exactly how the wisher defines “war”.
Depends. For starters are you counting revolutions and civil wars as “wars”?
The point being that you can’t infer that everyone believes in X in a society where X exists. They may dislike it but be unable to do anything about it.
I’m not making that argument. There’s polling out there that tells you what people like or dislike. I think that responsibility to protect (R2P) is accepted by a lot of people as a valid reason for military intervention.
Considering it was the norm for several thousand years of history and many philosophers either came out in favor of it or were silent … no, it’s not obviously correct.
There is obviously no one here who will disagree with it. But it is still a moral judgment, not a matter of fact.
Mencius Moldbug does argue that all moral changes after a certain point in time should be rolled back. That timeframe does include the abolition of slavery.
I don’t know whether there is at the moment someone on LW willing to make the argument for slavery explicitly, but you might find people who hold Moldbug’s position.
The last census shows a bunch of neoreactionaries.
A former poster here (known elsewhere on the net as “James A. Donald”) does disagree with it. He believes that slavery is the rightful state for many people. And for what it’s worth, he also believes that moral judgements are matters of fact, in the strong sense of ethical naturalism.
Where can I find evidence linking the sam0345 account to the identity James A. “Jim” Donald?
Somewhat laboriously, by searching LessWrong for his very first postings and working forwards from there, looking for my replies to him and he to me. I recognised him as James A. Donald as soon as he started posting here, from his distinctive writing style and views, which were very familiar to me from his long history of participating in rec.arts.sf.* on USENET. As evidence, I linked to other places on the net where he had posted views identical to what he had just posted here, expressed in very similar terms. He never took notice of my identification, even when replying directly to comments of mine identifying him, but I think it definite.
BTW, while “sam0345” is obviously not a real-world name, I have never seen reason to think that “James A. Donald” is. Searches on that name turn up nothing but his online activity (and a mugshot of an unprepossessing individual of the same name who served 35 years for forgery, and who I have no reason to think has any connection with him). I have almost never, here or anywhere else, seen him post anything personal about himself. He is American, and an Internet engineer, and that’s about it. And 10 inches taller than his wife, for what that’s worth. I have never seen anyone mention having met him. His ownership of jim.com is unusual, in that it goes back well before the advent of public Internet access and easy private ownership of domain names. Try getting a domain name that short and simple nowadays! They’re all taken.
Interesting! Before the great-grandparent I would have assigned a pretty low prior to sam being Jim; I never even considered the possibility explicitly. Now that I’m looking at it closely, sam does use a similar writing style. I’m updating substantially, and now believe there is a roughly 50-75% chance they’re the same person. Thanks for answering!
In which meaning do you use the word “correct”?
In which meaning do you use the word meaning?
Is this falsifiable?
Sure, just step back in time.
A bit less than two millennia ago one could have said: “Effectively there’s a position (that Jesus gifted eternal life to humanity) that’s obviously correct, but there are also people who are just too hidebound and change-averse to recognize it, and progress can’t be made until they die off. But progress will be made because the position is correct.”
I was actually thinking of eugenics, which was once a progressivist “obviously correct thing where we just need to wait until these luddites die off and everything will be great” thing, until it wasn’t. Incidentally a counterexample to “Cthulhu always swims left” too.
It’s a case where “correct”, “right side of history” and “progress” dissociate from each other.
I think you could make a case for totalitarianism, too. During the interwar years, not only old-school aristocracy but also market democracy were in some sense seen as being doomed by history; fascism got a lot of its punch from being thought of as a viable alternative to state communism when the dominant ideologies of the pre-WWI scene were temporarily discredited. Now, of course, we tend to see fascism as right-wing, but I get the sense that that mostly has to do with the mainstream left’s adoption of civil rights causes in the postwar era; at the time, it would have been seen (at least by its adherents) as a more syncretic position.
I don’t think you can call WWII an unambiguous win for market democracy, but I do think that it ended up looking a lot more viable in 1946 than it did in, say, 1933.
Note terms like the third position or third way.
Seen by some as doomed by history, perhaps. The whole point of US liberalism as I understand the FDR version was to provide a democratic alternative; you may recall this enjoyed some success.
Indeed, many of the most prominent supporters of fascism came from the traditional left. Mussolini was originally a socialist, Mosley defected from the Labour party, and they didn’t call it “national socialism” for nothing. In fact part of the reason why communists and fascists had such mutual loathing (aside from actual ideology) was that they were competing for the same set of recruits. Then again, Quisling and Franco especially were firmly in the right-wing camp.
With such concordance from all sides of the political spectrum it’s easy to see how one could conclude that totalitarianism was the next natural stage in history.
Interestingly, if you press the people making that claim for what they mean by “left”, their answer boils down to “whatever is in Cthulhu’s forward cone”.
For a more modern example, wouldn’t that have been said for marijuana a few decades ago?
Everyone expected that once the older people who opposed marijuana died off and the hippies grew into positions of power, everyone would want it to be legal. That didn’t work out. (The support for legalization has gone up recently, but not because of this.)
Guilty as charged.
The point is that decades ago, illegal substance use was popular among people of college age. Yet as those people grew up, they stopped using the substances and did not, once they were in power, try to make them legal. I’m not comparing young people today versus older people today, I’m pointing out that all those marijuana smokers from the 1960′s and 1970′s didn’t grow up and legalize pot. I’m sure back then if you went onto a college campus you’d have heard plenty of sentiment of “when the old fogies die off and we’re running the country, we’ll legalize weed”. The old fogies died off; the people from the 60s and 70s grew up to rule the country, and… it didn’t happen.
The peak year for the popularity of marijuana use among young adults (18-25 years old) was 1979, and it was still less than half.
According to your link, a poll in 1973 shows 43% of students having tried it with 51% in 1971. That 1979 figure is for people who are currently using it. I suspect the percentage that have tried it, rather than the percentage of regular users, is a closer fit to the percentage who would have supported legalization back then.
Furthermore, even if the percentage was under 50%, it’s clear that once they grew older they didn’t exert the massive influence over marijuana policy that would have been expected. If 30% or 40% of 25-40 year olds actively support something, even if they are not a majority, that’s going to be very prominent in politics and heavily drive the discourse, and that just hasn’t happened. (And even 30% or 40% might be enough to pass legalization, considering that a lot of the remainder are probably just neutral on the issue.)
Not really. US politics is a lot about what the kind of people who donate to political campaigns think about issues. The Koch brothers, for example, are old people supporting marijuana legalization.
It’s not unheard of for people who’ve recently tried various substances to nonetheless support stricter restrictions on them. The usual narrative goes something like “I can handle this, but there are lots of people that can’t, and we have to keep it out of their hands”, though the people in question vary—drawing class, demographic, or cognitive lines is common.
There can be other ulterior motives, too. In the early 2000s, a few marijuana growers in Northern California were among the opponents of a ballot proposition that would have legalized it in the state, because legalization was expected to harm their profit margins, doing more damage than removing the chance of arrest would have made up for.
Or, alternately, “It was a mistake for me to do it, and I was lucky to get away without punishment, but legalizing would encourage other people to make the same mistake.” I seem to recall a few U.S. politicians on both sides of the aisle saying things of this nature.
I would believe that people who used drugs back then would say this now. I find it hard, however, to believe that people who used drugs back then would have said it back then, and the point is that people back then thought they would legalize weed once the old fogies died off.
How do you know that this wasn’t the cause?
Because as army1987 points out, legalization is supported by the young, not by people who were young in the 1960′s and 1970′s.
Was the forceful kind ever an obviously correct/leftist position? To my mind non-violent eugenics is still obviously the correct thing where we just need to wait until the luddites die off—it’s just the association with the Nazis has given ludditery a big (but ultimately temporary) boost.
http://en.wikipedia.org/wiki/Compulsory_sterilisation_in_Sweden
Do you actually mean non-coercive? There are great many ways to apply pressure on people without actually getting violent....
I suspect it is falsifiable. I might unpack it as the following sub-claims:
1. Degree of status quo bias is positively correlated with time spent in a particular status quo (my gut tells me there should be a causal link, but I bet correlation is all you could find in studies).
2. On issue X, belief that X[past] is the correct way to do X is correlated with time spent living in an X[past] regime.
2.5. Possibly a corollary to the above, but maybe a separate claim: among people who you would expect to have the least status quo bias, position X[other] is favored at much higher rates than among the general population.
For most issues 2 and 2.5 can probably be checked with good polling data. Point 1 is the kind of thing it’s possible to do studies on, so I think it’s in principle falsifiable, though I don’t know if such studies have actually been done.
2) is also what you would expect to see if X[past] was indeed better than X[other].
2.5) Not having status quo bias isn’t equivalent to being unbiased. A large number of the people that are least likely to have status quo bias are going to be at the other end of the spectrum—chronic contrarians.
Note that which X is better may depend on circumstances (e.g. technological level).
In politics, no position is obviously correct. Claiming that one’s own position is obviously correct or that history is on our side is just a way of browbeating others instead of actually making a case.
Claiming that the opponents of some newly viral idea are “on the wrong side of history” is like claiming that Klingon is the language of the future based on the growth rate when the number of speakers has actually gone from zero to a few hundred.
No—you are telling them. To remind someone of a thing is to tell them what they already know. To talk of “reminding” in this context is to presume that they already know that they are wrong but won’t admit it, and is just another way of speaking in bad faith to avoid actually making a case.
One person’s status quo bias is another person’s Chesterton fence. The quote from which this comment tree branches is from Chesterton.
I strongly agree. It’s possible that history has a side, but we can hardly know what it is in advance.
I don’t think you agree. I think Eugine has a problem with the idea that an idea winning in history makes it a good idea.
Marx replaced what Hegel called God with history. Marx’s idea was that you don’t need a God to tell you what’s morally right; history will tell you. Neoreactionaries don’t like the sentiment that history decides what’s morally right.
I am not a neoreactionary and I think the sentiment that history decides what’s morally right is a remarkably silly idea.
You have to compare it to the alternatives. Do you think it’s more or less silly than the idea that there is a God in the sky judging what’s right or wrong?
Marx basically had the idea that you don’t need God for an absolute moral system when you can pin it all on history, which supposedly moves in a certain direction. You observe how history moves. Then you extrapolate. You look at the limit of that function, and that limit is the perfect morality. It’s what someone who has a rough idea of calculus does, but who doesn’t fully understand the assumptions that go into the process.
In the US, where Marx didn’t have as much influence as in Europe, there are still a bunch of people who believe in young earth creationism. On a scale of silliness that’s much worse.
Today the postmodernists rule liberal thought, but there are still relics of Marxist ideas. Part of what being modern was about is having an absolute moral system. Whether or not those people are silly is also open for debate.
Sure. Let’s compare it to the alternative that morality is partially biologically hardwired and partially culturally determined. By comparison, the idea that “history decides what’s morally right” is silly.
Yep, he had this idea. That doesn’t make it a right idea. Marx had lots of ideas which didn’t turn out well.
Oh, so—keeping in mind we’re on LW—the universe tiled with paperclips might turn out to be the perfect morality? X-D
And remind me, how well does extrapolation of history work?
Do you, by any chance, believe there is a causal connection between these two observations that you jammed into a single sentence?
Since culture evolves with history, there is a lot of overlap between culture determining morality and history determining morality.
What’s the overlap between two empty sets?
There’s no culture and no history?
Oh yes, there is culture, and there is history, and there is an overlap.
Now work out what two sets I am implying are empty.
OK. You’re one of the people who think that morality is arbitrary because your training did not equip you to think about it as non-arbitrary.
We weren’t talking about right or wrong, but about silly.
“Let’s do what’s partially biologically hardwired and partially culturally determined” is not exactly a battle cry under which you can unite people and get them to adopt a new moral framework. It also has the problem of not telling people who want to know what they should do, what they should do.
Yes, I do think that Marxism and Socialism have a lot to do with spreading atheism in Europe. Socialist governments made a greater effort to push back religion and make people atheists than democratic governments did.
If I hear Dawkins talk about how it’s important that atheists self-identify as atheists to show the rest of America that one can be an atheist and still be a morally good person, then that does indicate to me a problem of American culture that’s largely solved in Europe. Socialist activism has a lot to do with why that’s the case.
Dawkings → Dawkins
Thanks. The fact that I made that error is pretty interesting to me. Someone else used the Dawkings spelling a few days ago on LW. I felt that it was wrong and looked up the correct spelling to try to be sure.
Somehow my brain still updated in the background from Dawkins to Dawkings.
Promoting a century-and-a-half-old wrong idea looks pretty silly to me. You want to revive phlogiston, too, maybe?
That’s a good thing. I am highly suspicious of ideologies which want people to adopt new moral frameworks, especially if it involves battle cries.
That’s a feature, not a bug.
Oh yes, they certainly did. I take it, you approve of these efforts?
That question indicates being mindkilled. I happen to be able to discuss issues like that without treating arguments as soldiers.
Discussing cause and effects is hard enough as it is without involving notions of approval or disapproval.
The implication that somehow socialism isn’t responsible for spreading atheism in Europe because socialists used some immoral technique is a conflation of moral beliefs with beliefs about reality.
It seems to me that you two are talking past each other. Here’s what I hear:
ChristianKI: “Socialist movements and governments did successfully promote atheism and materialism in the populations of Europe. This is why Europeans do not tend to believe, as Americans do, that atheists are incapable of being moral.” (This is a descriptive claim about history and public opinion.)
Lumifer: “We should not advocate socialism as a way of promoting atheism and materialism, because socialism is awful and Marxist ideas of historical progress are silly.” (This is a normative claim about advocacy.)
You’re using “socialism” vaguely. Iron curtain socialism was awful. North-western European social democracy is not.
What do we get if we Taboo socialism?
Detail
I haven’t said anything about morals. In particular, I haven’t labeled any actions as immoral. I just inquired whether you approved of the efforts that the socialist governments have made in reality in the XX century to spread atheism.
Moreover, we are already past the question of whether the socialist governments made “a greater effort to push back religion and make people atheists”—we know they did—the issue now is the cost-benefit analysis of these efforts. You clearly like the outcome, so do you think the price was worth it? This is what I mean by the question about whether you approve.
I do approve of democratic socialism.
I’m heavily opposed to what currently happens in France when it comes to fighting religion.
But I guess both claims won’t tell the average person here very much, because the political background of European politics isn’t that clear in an English-speaking forum.
The question wasn’t which political system you approve.
The question was whether you think the outcome of more atheists in Europe was worth the cost incurred during the efforts of the socialist governments to suppress religion and promote atheism.
I’m living in a country in which the people who want socialism and have the most political power favor democratic socialism over communism.
In Germany you had a split in the left. One half thought that you need a revolution to achieve the goal of socialism and the other half thought that you can work within the democratic institutions to achieve the goal of socialism.
I haven’t met any young earth creationists in Berlin, or for that matter people who doubt the theory of evolution, so I’m completely happy with the state of affairs where I live. No Catholics bombing Protestants either.
On the other hand I don’t approve of the kind of policies that exist in France or Soviet Russia. I’m not familiar enough with Swedish policies to tell you whether I approve of them.
This is a bit of a sideline, but if you’re talking about the Troubles in Northern Ireland, I think modeling it as a religious conflict is the wrong way to go. The impression I get is more of religion as a shibboleth for cultural and political ties than the other way around.
Lucky you X-D
Right. Instead you had the Baader-Meinhof gang. They wanted socialism, too, didn’t they?
Their advocated way of getting there wasn’t the “way through the institutions” but “revolution”. There are Marxist arguments that revolution is the only way and that it’s not possible to change the system from the inside.
According to our university constitution students are supposed to vote in an election for a 5 person group to represent the body of students of a university department. At our university the students of the political science department don’t like this.
The elected 5-person body doesn’t constitute itself, and decisions are instead supposed to be made by a self-governed open body in which everyone who wants to can speak and that makes decisions via “consensus”.
I don’t see myself in that tradition or have any loyalty to that faction. As far as current affairs go, I would want liquid democracy for those student institutions, with some elected persons taking representative roles, and not “consensus”-style democracy.
Because Socialists are so well-known for their morality. Seriously, in the US Socialists and Marxists are the standard examples of how atheism causes people to lose their morals.
I cannot interpret that comment, because I cannot understand how you are using “socialism” or “morality”. Or how ironic you’re being. Where I come from, the term “Christian Socialist” can be taken with a straight face.
I think they’re both quite silly. Also, the fact that many people believe in God as a source of morality, is itself a reason why history (i.e. the actions of those people) is a bad moral guide.
Surely most pre-modern philosophers also had absolute moral systems?
Beforehand there was the idea that God is simply beyond human comprehension. One day he tells the Israelites to love their neighbors and the next he orders the Israelites to commit genocide.
You were supposed to follow a bunch of principles because those came from authoritative sources and not because you could derive them yourself.
If you read Machiavelli, he’s using God as a word at times when we might simply use luck today. Machiavelli very much criticizes that approach of simply thinking that God works in mysterious ways.
Greeks and Romans had many different Gods and not one single source of morality.
Of course absolute morality is not all the modernism is about.
I was thinking about classic and medieval Christian philosophy, which tied morality to an unchanging (and so absolute) God.
As an aside, when the Israelites were ordered to love their neighbors, the reference was to the neighboring Israelites and peaceful co-inhabitants of other tribes. Jews were never told by God to love everyone or not to have enemies; that is a later, Christian or Christian-era idea.
But still a mysterious God who’s so complicated that humans can’t fully understand him, so they should simply follow what the priest, who has more direct contact with God, says. Furthermore, you should follow the authority of your local king because of the divine right of kings that your local king inherited.
The idea that you can use reason to find out what God wants and then do that is a more modern idea.
Things switched from saying that if the telescope doesn’t show the planets moving the way the ancestors said they should, then the telescope is wrong, to the idea that maybe the ancestors were wrong about the way the planets move. The dark ages ended and you have modernity.
I don’t have much to say about the actual point you’re making, but you’ve been setting off alarm bells with stuff like this:
What’s your background on the history of this period? And on the philosophy of Marx and Hegel? The things you are saying seem to me to be false, and I want to check if the problem isn’t on my end.
What do you mean by “this period”?
I don’t think that modernity started with Hegel but with people like Machiavelli, around 1500. Hegel and Marx, on the other hand, did their work in the 19th century. I did read Machiavelli’s The Prince cover to cover.
In the case of Marx and Hegel, I’m German and am speaking about German philosophers. That means I have been educated in school with a German notion of what history happens to be. I don’t see political history in the Anglo-Saxon frame of Whig vs. Tory.
I did spend a bunch of time in the JUSOS, which is the youth organisation of the German SPD; the abbreviation roughly translates into Young Socialists in the SPD. I therefore did follow debates about whether socialism as an end goal should be kicked out of the party program of the SPD or be left in.
Lastly I did a lot of reading in political philosophy both primary and secondary sources. Most of it a while ago.
But one-sentence summaries of complex political thought are by their nature vague. Of course Hegel already had the notion of history, and my saying “replaced” might give the impression that he didn’t. But Hegel did have God and Marx did not.
Just out of curiosity, what was the result of those debates?
It’s still in there but more for symbolic reasons. Party leadership didn’t really want it but the party base did. The relevant phrase also happens to be democratic socialism. Meaning that the goal is economic equality but representative democracy and not a bunch of soviets and “consensus” decision making.
In practice the party policies under Schroeder were more “third way” and as a result they wanted to “update” the party program to reflect that policy change.
What do you mean by “economic equality”? Do you mean that everyone should have the same amount of money/resources? (This is not a stable state of affairs if people then proceed to engage in commerce).
If you have a government which constantly redistributes money you could hold it constant if you wanted to do so. But the people with whom I spoke usually don’t go that far. Concerns are rather that everyone has access to a “living wage”.
Defining exactly what the end state will look like isn’t that much of a concern if you can decide whether or not you are moving in the right direction. There is the feeling that third-way policies of cutting government pensions don’t go in that direction.
Yes, but that’s not exactly compatible with anything resembling freedom.
The problem is what’s considered a “living wage” changes with changes in society.
It is a concern if you want to evaluate whether you should even be trying to move in that direction.
What is a living wage changes with changes in society, and that isn’t obviously a problem. If society becomes richer, people expect higher wages, and if society becomes richer, it can afford them. Depending on the quantities.
Amazingly enough, freedom-supporting policies can negatively impact equality. To put it another way, if there were no conflicts between values, there would be no politics. To put it a third way, you keep writing as though you are the Tablet, and have the One True Set of Values inscribed in your brain.
Christian mentioned having the government constantly redistributing money as a possibly desirable end state. I was pointing out one of the implications of said end state.
Also I’m getting increasingly frustrated at people, yourself included, who keep trying to pass off their false beliefs about the nature of the world as different preferences.
In particular, to use the economic equality example, if you constantly redistributed money to keep everyone equal, as I mentioned it would destroy anything resembling freedom. But suppose you claim to have a utility function that puts no value on freedom. Well, another consequence is that it would destroy the motivation for people to engage in productive work (if the benefits would just get redistributed), so you’d wind up with a bunch of equally starving people. That is assuming this redistribution was somehow magically enforced; more realistically you’d wind up with everything in the hands of the redistributors.
Rousseau’s “The Social Contract” begins with the words:
I don’t think that any modern person on the left is as direct as that when it comes to freedom, but in European political thought the idea of the Social Contract is quite central.
The idea is that in the end state people would be motivated to work as a way of self-actualization and wouldn’t need financial incentives to do work. Star Trek has characters who work without getting paid to do so.
The observation that today many people need money to be motivated to work doesn’t mean that it will always be true in the future, or that we shouldn’t work on moving society in that direction.
The idea of an end state doesn’t mean something that can be reached in 10 years; it’s a state that can take quite a while to reach.
Could you taboo what Rousseau means by “master” and “slave” in that quote. As is, to me it sounds like deep wisdom attempting to use said words in some metaphorical way that’s not at all well-defined. Also I don’t see what this has to do with the subject.
The problem is that the work that’s self-actualizing is not necessarily the same as the work that’s needed to keep society running. In other words, attempting to run society like this you’d wind up with a bunch of (mediocre) artists starving and suffering from dysentery because not enough people derive self-actualization from farming or maintaining the sewer system. Historically, many attempts by intellectuals to create planned communities fell into this problem.
Fictional evidence.
Rousseau writes his central work to justify that man is everywhere in chains. Rousseau attempts to legitimize the Social Contract that takes away man’s natural freedom.
Rousseau later argues that man gets new freedoms in the process, but he’s not shy in admitting that man loses his natural freedom by being bound by the Social Contract.
The full text is readily available online. A “master” is someone with the power to tell others what to do and be obeyed; yet these masters themselves obey something above themselves (laws written and unwritten). Rousseau’s answer (SPOILER WARNING!!) is the title of his work. (To which the standard counter-argument is “show me my signature on this supposed contract”.)
A few more Rousseau quotes:
...
...
He is arguing here against theories whereby sovereignty must consist of absolute power held by a single individual beyond any legitimate challenge, his subjects having no rights against him. For Rousseau, sovereignty is the coherent extrapolated volition of humanity—or in Rousseau’s words, “the exercise of the general will”. Rousseau’s sovereignty is still absolute and indivisible, but is not located in any individual.
One can cherry-pick Rousseau to multiple ends. Here’s something for HBDers:
Libertarians may find something to agree with in this:
But to know what Rousseau thought, it is better to read his work.
Here is a decent debunking of the notion that modern society is based on a social contract. The basic argument is that if one attempts to explicitly write down the kind of contract these theories require, one winds up with a contract that no court would enforce between private parties.
More generally, Nick Szabo argues that the concept of sovereignty is itself totalitarian.
I agree with that.
It certainly is. Where does that leave FAI? A superintelligent FAI, as envisaged by those who think it a desirable goal, will be a totalitarian absolute ruler imposing the CEVoH and will, to borrow Rousseau’s words, be “so strong as to render it impossible to suspend [its] operation.” Rather like the Super Happies’ plan for humanity. The only alternative to a superintelligent FAI is supposed to be a superintelligent UFAI.
The open source movement is a better example of voluntary work than Star Trek.
In this case I don’t think so.
I didn’t want to give an example of work done as a volunteer but an example of a futuristic society where people don’t work for money.
The Open Source movement is also a bunch of different people doing things for various reasons and incentives.
People in Star Trek work sometimes for patriotism, sometimes for gold-pressed latinum, but mostly toward whatever the plot says they need to be doing. I foresee problems with using narrative tension as a medium of exchange.
I actually agree that running for 100% equality would likely result in 0% freedom.
For my money that is an extreme illustration of “you can’t satisfy all values simultaneously”, not of “left bad”.
Christian’s absolute egalitarianism is a view I have never heard articulated before. It seems to be the mirror image of anarcho-capitalism, the philosophy that guns for 100% freedom.
To me, it’s symmetric.
To you there is apparently a “side” that is in contact with reality, and a side that isn’t.
Yes, there are a lot of things that would go wrong, to the average utility function, with absolute egalitarianism. Ditto for absolute libertarianism. But you never mention that.
It’s an open question whether a given extremist, of any stripe, is (1) someone who has a one-sided utility function, or (2) someone who wrongly thinks that an average, mixed UF can be satisfied by extreme policies.
As such, you don’t get to assume that (2) is true of anyone in this discussion.
It wouldn’t result in much equality either. (Unless you mean equality in the sense that everyone is equally dead, which is a possible if extreme outcome.)
I also never called absolute anarcho-capitalism (I assume that’s what you mean by “absolute libertarianism”) a desirable end-state.
The problem is that as I pointed out the way these people pursue their one-sided goal won’t even maximize the one-sided utility function.
Edit: Speaking of freedom and equality don’t you also want a term for prosperity in there somewhere?
Or wellbeing, since dollars aren’t utilons.
I don’t define prosperity in terms of dollars.
If you want to have it articulated in a bit more detail, Zeitgeist: Addendum can give you an impression. With 5 million YouTube views, there are quite a few people on the internet who profess to follow that ideology.
According to it, we need a central computer that tells everyone what work to do. People will do what the computer tells them because their education teaches them the value of following what the computer tells them, so perfectly that everybody just does what’s in the “public interest” and follows the directions of the central scientific computer program.
Because there won’t be money anymore, nothing will stop the digging of intercontinental tunnels for transportation needs so that you don’t need airplanes.
I have met multiple people who believe that framework. Fortunately they are outside of the political process, where they won’t do much harm. Unfortunately a bunch of them are smart, so intelligence doesn’t seem to protect against it. One of them ranks quite well in debating tournaments.
Wow, there are so many things wrong with this proposal that I’ll just mention the one that disgusts me on a visceral level. One effect of this scheme (if it could somehow be made to work) is that a certain organ that consumes nearly one quarter of the body’s energy is now completely vestigial.
I can describe ideas without them being mine. In this case we are speaking about ideas in the party program of the SPD.
Is this line of conversation still “just curiosity” about the results of SPD debates, or are you trying to bait an argument?
I’m trying to figure out what Christian, and more generally the typical German, mean by “socialism” these days. Does it have a more moderate end goal than the older socialists, or do they have the same end goal and have simply decided to approach it more slowly?
Thanks, that’s helpful.
For Hegel and Marx history is the process of change.
Both the amount of Gods per person and the percentage of people who believe went down over time. Thus history favors atheism.
I don’t see why the ‘amount of Gods per person’ is a valid metric for anything. Progression from poly- to monotheism doesn’t imply a future progression to atheism.
The actual percentage of atheists in society has indeed increased over time, but it’s never been significantly above 10% worldwide and it’s not clear that it’s rising right now (Wikipedia source). It’s hardly strong enough evidence to conclude that a majority of humanity will be atheistic one day. Other religions surely exhibit or previously exhibited rising trends at least as strong.
In general, neoreactionaries seem to have cribbed this position from Herbert Butterfield’s critique of what he called the “Whig Interpretation of History”. Butterfield was not himself a neoreactionary, and in fact warned against the trap that many neoreactionaries fall into: that of thinking that just because Whig histories are invalid, Tory histories are somehow valid.
There are (at least) two things wrong with “the right side of history”. One is that we can’t know that history has a side, or what side that might be, because a tremendous amount of history hasn’t happened yet; the other is that history might prefer worse outcomes in some sense.
I find the first sort of error so annoying that I normally don’t even see the second.
My impression is that Eugine is annoyed by both sorts of error, but I hope he’ll say where he stands on this.
There’s a third thing wrong with it: generally, people use the phrase in order to praise one side of some historical dispute (and implicitly condemn the other) by attributing to them (in part or in whole) some historical change that is deemed beneficial by the person doing the praising. The problem with this is that when you go back and look at the actual goals of the groups being praised, they usually end up bearing very little relation to the changes that the praiser is trying to associate them with, if not being completely antithetical. Herbert Butterfield (who I posted about above) initially noticed this in the tendency of people to try to attribute modern notions of religious toleration to the Protestant reformation, when in fact Martin Luther wrote songs about murdering Jews, and lobbied the local princes to violently suppress rival Protestant sects.
What’s the precise sense of “attribute” in that claim? It’s not obviously implausible to claim that the more groups are competing with each other, the less likely it is that any one can become totally dominant, and so the more likely it is that most of them will eventually see mutual toleration as preferable to unwinnable conflict. This doesn’t have to be an intended effect of the new sects to end up being an actual effect.
I hadn’t even thought of the first objection, possibly because I stopped considering “what side history is on” a useful concept after noticing the second one.
Speaking of which, let’s see what history has to say about Marx. It would appear that the Marxist nations lost to a semi-religious nation. Thus apparently history has judged that the idea that history will tell you what is right to be wrong.
I’m very far from being a reactionary or neoreactionary, but I also don’t put much moral weight on history—that is, on what most other people come to believe.
For one thing, believing that would mean every moral reformer who predicts for themselves only a small chance of reforming society, should conclude that they are wrong about morals.
If you’re on the winning or ascending side, you have more arguments in your favour... at this point in history, where democracy and its twin, rational argument, reign. That doesn’t add up to being right, because epistemology, i.e. styles of persuasion, has varied. To know the right epistemology, you need... epistemology. That’s why philosophy is difficult.
Meaning: you can’t spot a trajectory while you’re halfway along it?
Meaning: you can, but it doesn’t mean anything epistemologically?
Considering how many centuries it took humanity to get from its first curiosity about how things work to predicting the trajectory of a falling rock (the irony of your handle piles higher and higher), predicting trajectories in history is a fool’s task. How many predicted the Internet? How many predicted the end of the Soviet Union? How many can predict developments in Ukraine?
“History is on our side” is not an argument, but a cudgel.
Yep. It’s nothing but a minor variation on “God is on our side!” X-D
Don’t use it then. :-)
Arguing about preferences (=opinions, =values) is pretty pointless.
Yuval Levin in the National Review
To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).
-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Are we sure about this? Einstein’s idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
Ceteris paribus, then. On average, a thought experiment along the lines of “what if I poured this stuff on my trousers” is of much more practical use and tells you much more about reality than a thought experiment along the lines of “what if I could ride around on [intangible thing]”. The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out “what if I rode on a beam of light”-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein’s successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness.
(note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
This is funny. Until I read your comment, I was misreading the original quote; I didn’t notice the “inversely” part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that’s my point! It sounds just as profound without the ‘inversely’.
Nassim Taleb
This seems false in physics. Prestige of your institution matters. Prestige of the journal matters, too. arXiv is fine, Physical Review is better, PRL is better yet. Nature/Science is so high that if you publish something that is not perceived as top-quality, you may get resented by others for status jumping. And there are plenty of journals which only get to publish second- and third-rate results.
Of course, the usual countersignaling caveat applies: once you have enough status, posting on arXiv is enough; you will get read. Not submitting to journals can be seen as a sign of status, though I don’t think the field is there (yet).
My understanding is that this effect is a lot smaller in physics than in the humanities.
By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
Can’t be. You can’t draw a distinction within a category by separating it into two subcategories one of which is empty.
The category being separated is “disciplines”, which divides into “BS” and “non-BS”. “Academic” disciplines are thus a further subcategory of “BS” disciplines.
Actually, “academic” disciplines would probably be a subcategory of “disciplines” which is largely but not entirely subsumed by “BS” disciplines, but I don’t usually demand that level of precision from witticisms.
[For the record, separating a category into two subcategories and proving one of them empty is just another way of proving the original category is identical with the non-empty subcategory. It is, indeed, valid from a technical perspective.]
You can, though it’s usually useless; but it also depends on whether that subcategory is necessarily empty, or merely happens to be empty now while in principle it could be non-empty.
(But it’s still a fallacy of grey: even if all academic disciplines were, in fact, BS disciplines, some disciplines may still be less BS than others.)
I think, by this standard, law is a BS discipline. But I’m not sure what to make of that.
Well—law is, in a strict sense, entirely about convincing other humans that your interpretation is correct.
Whether or not it actually is correct in a formal sense is entirely screened off by that prime requirement, and so you probably shouldn’t be surprised that all methods used by humans to convince other humans, in the absence of absolute truth, are applied. :)
Would that include drafting a fire code for buildings? Would it include negotiating a purchase and sale agreement for a business? Would it include filing a lawsuit for unpaid wages? Would it include advising a client about the possible consequences of taking a particular tax deduction?
It’s hard to see how it would, and yet all of these things are regularly done by lawyers in the course of their work.
Those are, indeed, all examples of persuading human beings.
The other two are excellent points.
“persuading human beings” is not exactly the same thing as “convincing other humans that your interpretation is correct.”
Besides, in negotiating an agreement much of the attorney’s job consists of (1) advising his client of issues which are likely to arise; (2) helping the client to understand which issues are more important and which are less important; and (3) drafting language to address those issues. Yes, persuasion comes into it sometimes, but it’s usually not primary.
Filing a lawsuit for unpaid wages can be seen as persuasion in a general sense. If Baughn wants to claim that in a strict sense, litigation is about getting other people to do stuff, then I would agree.
Thank you.
In a narrow, rather than a strict sense. In that same narrow sense:
science is about convincing other humans that your experiments are correct
art is about convincing other humans that what you have made is art
parenting is about convincing other humans that you are a good parent
working for a living is about convincing other humans to pay you a living
competitions are about convincing other humans that you have won
teaching is about convincing other humans that you are teaching
being intelligent is about convincing other humans that you are intelligent
living is about convincing other humans that you are not yet dead
… and the best way to do that is, in theory, to do good experiments. Hence replication and so forth. That’s the basic idea of Science. (Alarmingly, some modern “scientists” have indeed been found cutting out the middle-man, as it were.)
Yes, this seems like a reasonable assessment. Some people have other goals in influencing the minds of their viewers, although this tends to edge into advertising etc.
No, this is only the lower bound that allows you to retain custody of a child. Most of the rewards etc. of parenting are unrelated.
Definitely.
This seems quite analogous to law, if the lawyer is themselves the defendant.
In practice, sadly, it is. The incentive structure for teachers is pathetic, because no one actually cares about it.
Not sure what to make of these last two; they don’t seem remotely analogous to the OP.
More seriously, law is also about predicting what those humans will be most easily convinced of.
What if you convince everyone that you’re a good parent while poisoning your child?
And everyone else can believe you’re dead and you can still be alive. In fact sometimes in order to live you have to convince everyone you’re dead.
Quite. Those were all intended to be bad arguments. Bad at a caricature level of badness. But Poe’s law, I guess.
Oops, my fail. I thought you were saying ‘there’s nothing special about the law’.
But surely art really is about convincing other human beings that you’ve made art. What on earth else is going on?
And I reckon that there are aspects of law that aren’t covered, but a barrister who can’t convince is completely useless in court.
Off to update my estimate of my own written-irony perception skills.
This is the sort of question that you have to already know the answer to, to be able to ask. I won’t attempt a definition, but as we all know, it involves such things as “the creation of things of beauty”, “the expression of a truth that nothing else can express”, and so on. That is what art is. We all know that that is what art is.
But for purposes of contradiction, suppose otherwise. Suppose art was entirely about convincing people that you have made art. Then the statement is a definition of art as being the fixed point of the formula “X is about convincing people you have made X”. What in this formula picks out the class of works that, when we look at the real world, we see everyone calling art? There is nothing. If this is truly a statement of everything that art is, we should be able to insert a made-up name in the definition and convey the same information: “pightlewarble is about convincing people you have made pightlewarble”. The fact that the revised sentence conveys nothing, yet “art is about convincing people you have made art” conveys something, demonstrates that the latter only communicates something because we already know something about what art is.
When such a sentiment is expressed, what it is intended to communicate is a criticism of art as practiced in the speaker’s time and place. The claim is that what is being produced is not art, and that it fails to be art precisely because its creators have concerned themselves with nothing more than getting an artistic reputation among a similarly corrupt audience, and have failed to aim at making art at all. The “art” that the sentence is about is being asserted to be not art. The real meaning is the opposite of the literal meaning.
A similar analysis applies to every one of the examples I gave. All of them, when seriously uttered, mean the opposite of their literal reading.
Of course, the artist wants an audience, the lawyer must persuade the court, and so on. But these are not the terminal goals of the activities, and to take them for such is wireheading.
Here is another example. Tomorrow I will travel some 300 miles to Glasgow. How will I know when I have arrived? Well, if all goes to plan, the train will be pulling into Glasgow at about when the timetable says, there will be an announcement on the train, I will see signs on the platform saying “Glasgow”, I’ve been there several times before so it will probably look familiar, and so on. (Btw, I also expect to be both busy and offline most of the weekend.)
So is travelling to Glasgow entirely about convincing myself that I have travelled to Glasgow? Of course, I have to reach the state of being convinced, just as the lawyer must convince the court etc. But the real goal is not to merely be convinced that I am in Glasgow, but to actually be there. In the real world of here and now (a place not as well frequented by LessWrongers as it might be) the only way of achieving the perception is to achieve the reality. Were this not so, my ability to achieve my real goal would be compromised, and I would have to find some other way of detecting when the goal was achieved. Compare the task of flying an aircraft in turbulence and poor visibility. A pilot who thinks that keeping the aircraft level is about feeling that the craft is level will crash. He has to trust the instruments above his physical sensations, and the instruments are there because the sensations are unreliable under those conditions.
Would you get on a train, if maintenance of the railway system was literally entirely about filling in the forms saying that the maintenance had been done?
Is it, really? Have you thought about art as pure self-expression? Art as a way to attempt to magically manipulate reality? Even art as a way to make something look pretty?
Then you are a successful Christian Scientist, say. Those are all descriptive claims, not normative ones.
Interesting. There are famous cases of self-taught lawyers from previous centuries.
I wonder if this says something bad about the modern legal system. Maybe the modern legal system is less about making arguments based on how the law works (or should work) than about the lawyer signaling high status to the judge so that he rules in your favor.
There are famous cases of self-taught specialists in scientific fields, too. There aren’t so many of them nowadays. That’s because both the law and science are in a state where a practitioner must know a lot of details that didn’t exist as part of the field in earlier days.
I don’t think I have good reason to think this is the case. At any rate, it’s clear enough that the prestige bit seems to come in heavily in hiring decisions, so let’s just talk about that. How, in the ideal case, do you think lawyers would be evaluated for jobs? Off hand, I can’t think of anything a lawyer could produce to show that she’s a good hire.
I’m not a lawyer, and English law is different from American, but I reckon that I can tell the difference between good and bad lawyers by talking to them for a while about various cases in their speciality and listening to them explain the various arguments and counter-arguments.
I’ve heard people who make a good living from the law make incoherent wishful-thinking type arguments about which way a case should have gone, when I can see perfectly well how the judge was compelled to the conclusion that he came to. I wouldn’t want such a person defending me.
Presumably if you are yourself a good lawyer, it shouldn’t be too difficult to do this. The law is fairly logical and rigorous.
Well, if his “reality distortion field” was powerful enough to also affect judges.
I think Spooner got it right:
-Lysander Spooner from “An Essay on the Trial by Jury”
There is legitimate law, but not once law is licensed, and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment of virtue attracts sociopaths to the study and practice of law, and drives out all moral and decent empaths from its practice. If not driven out, it renders them ineffective defenders of the good, while enabling the prosecutors who hold the power of “voir dire” jury-stacking to be effective promoters of the bad.
The empathy-favoring nature of unanimous, proper (randomly-selected) juries trends toward punishment only in cases where 99.9% of society nearly-unanimously agree on the punishment, making punishment rare. …As it should be in enlightened civilizations.
They can vote against people who write or enforce unjust laws. There’s not much they can do about the judicial branch, but they only need to stop one branch.
That’s the US anyway. I don’t know the details about other countries.
If there’s that much corruption, as opposed to people simply not voting for what they claim to care about, I don’t think juries are going to be much help.
What exactly is meant by the phrase “BS discipline”? Is the claim that most scholarship in law is meaningless nonsense? Or is the claim that there is no societal value at all in law? Or is it something else?
I suppose a discipline is BS if, in the case of a science, it fails to systematically track the realities of its object of study. In the case of a trade, like business management or welding, it’s a BS discipline if it fails to make its practitioners more successful than those outside the discipline. I’m not sure what kind of a discipline law is.
Taleb’s thought, I suppose, is that a discipline is likely to be BS if, instead of directly measuring the capabilities of its practitioners, we tend to measure only indirectly. This only implies that direct measurement is costly enough to outweigh its benefits, however. One reason for that may be that there’s nothing to measure directly (i.e. the discipline is BS), but another might be that the discipline is so specialized that very few people are competent to judge any given applicant. Yet a third might be that its subject matter is subject to a lot of mind-killing, so that no one can confidently judge an applicant without bias.
I agree that it’s difficult to tell how good a lawyer is, which leads to a lot of nonsense like firms spending a lot of money on impressive offices and spending hours and hours of time chasing down every last grammatical error before filing court papers.
This is true for a lot of professions. Most of them don’t have the problem you’re describing.
Would you mind giving me three examples? This would help me think about what you are saying. TIA.
Plumbers, auto-mechanics, doctors.
Thank you for answering. I would have to say that with plumbers and auto-mechanics, it is a lot easier to assess how good they are compared to lawyers since if they do their job properly, the problem they are working on will normally be solved and if they do not do their job properly, the problem will normally not be solved. Do you agree with this?
I agree that with doctors, there is a similar problem of difficulty in assessing quality as with lawyers. On the other hand, there are also problems with doctors spending energy on signalling, although perhaps not as bad as with lawyers. For example, caring about where a doctor went to medical school; prestigious internships; and spending money on impressive facilities. Do you agree with this?
And with a lawyer you can tell what the outcome of the trial was. Now obviously, the lawyers might overcharge you, but you have the same problems with car mechanics and plumbers. Also, for some cases the kind of outcome one can reasonably expect depends on details of the case that may not be obvious to a non-lawyer, but you have the same problem with car mechanics.
Also, with all four of plumbers, auto-mechanics, doctors, and lawyers (especially contract lawyers) it’s possible for them to screw up in ways that aren’t immediately obvious but will cause problems down the line. (With lawyers it will at least be more obvious that the lawyer screwed up when the problem finally surfaces.)
If someone is found guilty in a trial, is that a sign of a poor lawyer, or is that a sign that he was, in actual fact, guilty as charged, independent of the ability of his legal team?
I mean it’s not like a good legal team has ever allowed a guilty man to get away with it. Also, presumably the person knows whether he is guilty.
A highly competent legal team may allow a guilty man to get away with a crime, yes. And an incompetent legal team may allow an innocent man to get convicted.
But a very competent legal team which normally takes cases where the defendant is guilty will do very badly by this metric, while an incompetent legal team that happens to get a lot of innocent clients might do very well by the same metric.
If I wish to select a lawyer to defend me in a trial, then I know whether or not I am guilty of whatever I am being charged with. I do not know how many of the lawyer’s previous clients were guilty; nor how many were wrongfully convicted, or wrongfully released. Thus, a mere count of previous victories in court is potentially a poor measure of the lawyer’s effectiveness.
Yes, and the same problem can exist for plumbers, car mechanics, and doctors.
Academics also.
You have a point—a man who takes on only easy problems, in any field, will have a higher success rate than a man who takes on only hard problems, irrespective of actual skill level.
I think that what makes evaluating a lawyer in particular difficult is that it is very hard for a non-lawyer to easily distinguish easy from hard problems. For car mechanics, I know that replacing the oil is a much simpler job than replacing the engine; but when looking over a lawyer’s history, I can’t easily evaluate the relative difficulty of his previous successes.
On the other hand, if I come in complaining that the car is making funny noises, it’s a lot harder to see whether this is an easy or hard problem. Another example: I come in for a routine inspection and he tells me that some part I’ve never heard of needs replacing and it’s going to be expensive. I have no way to check short of going to a different mechanic and then somehow figuring out whom to trust.
Even putting aside the fact that the vast majority of litigation is resolved before trial, there is also the fact that excellent lawyers lose cases all the time due to a lot of extraneous factors. By analogy, if the auto mechanic charged you $500 to change your brakes, and after he was done with the car the brakes still didn’t work, you could be pretty confident that you have a lousy auto mechanic.
Do you agree that in litigation there is much more of a problem of extraneous factors making it difficult to assess the lawyer than there is in auto repair of extraneous factors making it difficult to assess the mechanic?
Do you agree that with plumbers and auto mechanics it is a lot easier to assess how good they are compared to lawyers since if they do their job properly, the problem they are working on will normally be solved and if they do not do their job properly, the problem will normally not be solved?
Do you agree there are also problems with doctors spending energy on signalling, (although perhaps not as bad as with lawyers), for example, caring about where a doctor went to medical school; prestigious internships; and spending money on impressive facilities?
These are real questions, not rhetorical questions; they are aimed to get a better grip on where we agree. Please actually answer them as opposed to just answering the argument you imagine is behind them.
What if the brakes now work, but not necessarily quite as well as they did before? If an auto mechanic tells you your car is totaled, how do you know he’s correct?
That depends on the details of the problem. In a sense the same is true for lawyers. I agree that there are quantitative differences about exactly how likely you are to get a good estimate with what amount of certainty between these examples but I don’t think it’s large enough to make a qualitative difference in the analysis.
Those are interesting questions, but unfortunately you have basically ignored two of the three questions I asked you. As mentioned above, these were real questions aimed at getting a better grip on where we agree. It’s difficult enough to discuss these kinds of things without having the other person dance around the issues. I don’t engage with people who do this... goodbye.
I figured the answers to those were easy to extrapolate from what I wrote; in any case, here they are.
I agree that this is more of a problem for lawyers, although I’m not sure how much more.
It is, but I’ve never heard anyone say that there is no point going to anything besides the top tier medical schools.
How about manufacturers of multivitamins?
(BTW, the term for this sort of thing is credence goods.)
Clifford Truesdell
This is beautiful: I can’t turn it into equations. Does that refute it or support it?
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
There are symbol-juxtapositions which are syntactically or semantically disconnected from any model set in ZFC. There are no sets in ZFC which are similarly separated from statements in a suitable language.
This looks like the sort of thing that I usually find enlightening, but I don’t understand it. Could you repeat it in baby-speak?
You can write nonsense formulas on paper which don’t correspond to theorems about anything. You can’t construct nonsense universes which aren’t described by theorems anywhere.
Words only mean anything because we interpret them to correspond to the real world. In the absence of words, the real world continues existing.
I don’t see why an equation can’t be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
Equations can be nonsensical, but it’s harder to write a nonsense equation than a nonsense sentence (like the old joke: it’s easy to lie with statistics, but it’s easier to lie without them). In a way this was the unpleasant surprise of Gödel’s incompleteness theorem; before that we’d hoped that every well-formed proposition was true or false and could be proven to be so.
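To make the "harder to write nonsense" point concrete, here is a toy sketch (the grammar is made up for the example, not any standard system): a tiny well-formedness checker for a propositional language. Formal syntax mechanically rejects a scrambled formula, whereas English grammar will cheerfully accept a nonsense sentence.

```python
# Toy well-formedness check for a made-up propositional language.
# Grammar (illustrative only): F ::= p | q | (F&F) | (F|F) | ~F

def well_formed(s: str) -> bool:
    pos = 0

    def parse() -> bool:
        nonlocal pos
        if pos >= len(s):
            return False
        c = s[pos]
        if c in "pq":            # atomic proposition
            pos += 1
            return True
        if c == "~":             # negation
            pos += 1
            return parse()
        if c == "(":             # binary connective
            pos += 1
            if not parse():
                return False
            if pos >= len(s) or s[pos] not in "&|":
                return False
            pos += 1
            if not parse():
                return False
            if pos >= len(s) or s[pos] != ")":
                return False
            pos += 1
            return True
        return False             # any other symbol is nonsense here

    return parse() and pos == len(s)

print(well_formed("(p&~q)"))   # True: a sentence of the toy language
print(well_formed("p&)q~("))   # False: the same symbols, scrambled
```

The same symbols in a different order simply fail to parse; natural language has no equally mechanical filter, which is one reading of why nonsense prose is so much cheaper to produce than nonsense formulas.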
Raising Steam, Terry Pratchett
Regarding the first steam engine in Pratchett’s fictional world.
Relevant is the Amtal Rule on this same page: http://lesswrong.com/r/lesswrong/lw/jzn/rationality_quotes_april_2014/as28
This quote made me read the book, and I wasn’t disappointed.
The overall arc of the Discworld is stunning; in retrospect, it recapitulates the rise of civilization well. Raising Steam is perhaps not the last book, but it wouldn’t be a bad place to stop if it were.
Terry Coxon
All I’m getting out of this is that the quoted author fails to understand the ability of great minds. Is there a context I’m missing?
Being ready for failure is not quite the same thing as considering success impossible.
The context is that economics is in, shall we say, an earlier stage of development than engineering, so we should be more conscious of the risk of economic tinkering failing than we need be of whether our bridge or plane will fall apart underneath us.
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.
Alan Turing (from “Computing Machinery and Intelligence”)
A particularly relevant quote given Yvain’s recent http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
That is an exceedingly interesting article. Thanks for the link.
Can you provide some context? I don’t understand: the claim that the evidence for telepathy is very strong is surely wrong, so is this sarcasm? A wordplay?
Turing’s 1950 paper asks, “Can machines think?”
After introducing the Turing Test as a possible way to answer the question (in, he expects, the positive), he presents nine possible objections, and explains why he thinks each either doesn’t apply or can be worked around. These objections deal with such topics as souls, Gödel’s theorem, consciousness, and so on. Psychic powers are the last of these possible objections: if an interrogator can read the mind of a human, they can identify a human; if they can psychokinetically control the output of a computer, they can manipulate it.
From the context, it does seem that Turing gives some credence to the existence of psychic powers. This doesn’t seem all that surprising for a British government mathematician in 1950. This was the era after the Rhines’ apparently positive telepathy research — and well before major organized debunking of parapsychology as a pseudoscience (which started in the ’70s with Randi and CSICOP). Governments including the US, UK, and USSR were putting actual money into ESP research.
Yes, but also remember that Turing was English, shy, and from King’s College, home of a certain archness and dry wit. I think he’s taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He’s facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there’s still a profound feeling that he’s defying the data. Or maybe not. Maybe I just read it that way because I don’t buy telepathy.
Hodges claims that Turing at least had some interest in telepathy and prophecies:
Alan Turing: The Enigma (Chapter 7)
I think Turing’s willingness to take all comers seriously is something to emulate.
Hans Hahn, Otto Neurath and Rudolf Carnap, The Scientific Conception of the World: The Vienna Circle, 1929.
Cool! I’ve looked for that manifesto on line before, and failed to find it; thanks for the link! Too many people seem to get all of their knowledge of the Vienna Circle and Logical Positivism from its critics. It’s good to look at the primary sources. The translation is a little clunky (perhaps too literal), but so much better than not having it available at all.
I agree.
The Logical Positivists were, to my mind, the greatest philosophers ever, and it’s a shame they have been the target of so much unfair criticism. Of course they were wrong on many issues, but their attitude towards philosophy, knowledge and political action is unsurpassed. If we can revive their spirit again, philosophy will have a bright future.
What was the logical positivist position on political action? Are you talking about things like getting evolution out of science classes, or are you talking about something else?
I’m talking primarily of their resistance to Nazism, and how they saw intellectual and political struggles as inextricably intertwined. In this they were very similar to the French revolutionaries. See for instance this article where Carnap criticizes the Nazi philosopher Heidegger in his usual meticulous and over-dry manner. Amazing that he managed to keep so cool in the face of such evil stupidity.
After the war, the US and Britain became the heart of analytic philosophy, and much of the seriousness of the Vienna Circle (and also Popper) disappeared. What replaced it was a rather frivolous, smart aleck kind of philosophy personified especially by people like Lewis and Kripke, but to some degree also Quine, Davidson, Austin and others.
In his excellent The Decline of the German Mandarins, Fritz Ringer shows that German academia grew increasingly dominated by mad romantic reactionaries from 1890 to 1933 (where the book ends). It seems to me (and I think, but am not sure, that Ringer touches upon this at some point) that this, however, spurred real thinkers in the enlightenment tradition to greater heights than they otherwise would have reached. They were forced to focus on the big questions, to come up with fundamental reasons for why you should adopt the rationalist perspective, because, unlike in the Anglo-Saxon world, this perspective had a terrifying opponent in the form of romantic reaction. Ringer mostly focuses on the great sociologist Max Weber and others like him, but I think that a similar story can be told about the Vienna Circle (I don’t recall whether he comments on them).
The Logical Positivists were mostly pretty far left, but they mostly didn’t engage in much political advocacy; though this was controversial among members of the movement (Neurath thought they should be more overtly political), most of them seemed to think that helping people think more clearly and make better use of science was a better way to encourage superior outcomes than advocating specific policies. They were also involved in various causes, though; many members of the Vienna Circle were involved in adult education efforts in Vienna, for example. The more I think about it, the more I think it’s pretty accurate to say they had a lot in common with the Less Wrong crowd in their approach to politics (though they were almost certainly further left, even taking into account that the surveys suggest Less Wrong itself is further left than many people seem to realize).
This quote by Anthony de Jasay echoes the Logical Empiricist stance on political action.
Nassim Taleb
Is this being quoted as an example of a rhetorical figure? I can’t even see what it’s supposed to mean, let alone work out whether it’s true.
Source: http://www.prequeladventure.com/2014/05/3391/
Thank you for posting this—now I have something new to read!
Eight Ways to Build Collaborative Teams by Lynda Gratton and Tamara J. Erickson
This seems applicable as the LessWrong community is “large, virtual, diverse, and composed of highly educated specialists” and the community wants to solve challenging projects.
Is the paper worth reading in that it offers solutions to this problem?
These are the key points from page 7:
I have seen failure at this lead to a decline in participation, esp. by key contributors who didn’t see their effort honored or supported.
For LW this might mean key contributors supporting the creation or operation of benefits like the new business networking and user page initiative, or in general the operation of the site.
On LW the active members already act as role models.
I can only guess that that is what CFAR does.
Building real-life relationships is done by meetups. I see the meetup resources as an effort to support this. But maybe someone could actively contact the meetup organizers and look at whether there is potential for improvement.
I felt this at the Berlin event.
I can’t quickly evaluate this. Ideas?
This follows from LW being a community and not a business.
There was a post and discussion on roles but I can’t find it. Maybe this needs more structure.
Edited OP to make it clear that you can provide a link to the place you found the quote, rather than needing to track down an authoritative original source.
Scott Adams on consciously controlling your own moods and feelings
It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.
Sir Arthur Eddington, 1939, The Philosophy of Physical Science
Nassim Taleb
flying vs aeroplanes?
Examples?
Scholastic theology comes right to mind.
Scholastic theology was complex, but so are many theories more highly regarded than it nowadays. Like physics.
Or consider the modern economy—when I contemplate a wireless mouse, it seems vastly more complex than the problem it solves (a mouse which requires a wire), yet, the wireless mouse still works & is nice to have.
In fact, you could probably consider every single technology and economic system a counterexample to this Taleb quote. It’s all epicycles upon epicycles, techniques to solve problems which you only have because of earlier techniques you use to solve other problems, in a dizzying spiral of accumulating complexity (think of Kelly’s What Technology Wants here). Yet are we really worse off than hunter-gatherers?
In strict hedonic terms, I should say yes.
Um, the entire point of the Occam’s Razor/Kolmogorov complexity approach is that physics isn’t complicated.
And do you think Taleb is speaking in a specific, precise, highly-technical, highly unusual definition of ‘complex’ rarely found outside of computer science circles and extremely niche interest groups like LessWrong? Or do you think he is speaking in the usual colloquial sense of ‘complex’ which everyone understands and under which physics is indeed extremely complex and difficult?
Chaos theory is a part of physics that deals with complexity that Taleb would probably call complex. On the other hand I don’t think that Taleb would call classical Newtonian physics complex.
I don’t think that Taleb would be a fan of wireless mice. A wireless mouse has the failure mode of the battery dying. A wired mouse doesn’t have that problem because it’s less complex. Look at his website. Despite various people offering to do the necessary work for him, he doesn’t use a content management system like Wordpress.
When it comes to software design Taleb probably favors the 37 signals philosophy. It tries to solve problems in a way that’s as simple as possible instead of being complex.
Take tax law as another example. Politicians want to encourage certain behavior, so they write an exception into the tax law under which people who engage in that behavior pay less tax. It adds complexity to the system even if it gets some people to shift their behavior in the right direction. The result is that developed countries have very complex tax laws that nobody really understands.
Taleb would recommend making the tax law simpler by not trying to shape the law so that it covers every single exception and encourages people to engage in specific actions. Whenever Congress passes a tax law, the tax code shouldn’t grow in size but shrink.
Changes in tax law should not increase its Kolmogorov complexity but reduce it. Washington politicians don’t understand that. Even economics professors like our Hanson don’t.
Low Kolmogorov complexity tax law would also be easier to understand in the more colloquial sense of ‘complex’.
These two statements are contradictory. Lots of Newtonian physics problems are chaotic; in fact, Poincaré developed the early ideas that became chaos theory to deal with the three-body problem.
Would he?
More the worse for him. The markets have spoken and they like their wireless mice. Wirelessness, for all the added complexity—complexity which is much more complex than the problem it solves—seems to be carrying its weight, contra the Taleb quote.
I doubt that. Here is where using a complex and not widely-understood theory can bite you: the shorter the code, the more computation you expect it to use to yield the specified results. The shortest possible code for anything reasonably complex will take huge amounts of computing power. (How much computation to simulate our universe up to the present moment using the minimal encoding of the Standard Model + initial conditions?) Any shorter version of the current tax code which has the same meaning will be much harder to understand than a version which redundantly spells out details and common cases for you. And if you change the actual meaning of the tax code, well, then you run into the public choice issues which made the meaning what it is now...
(It would be better to talk about logical depth and sophistication than simple uncomputable Kolmogorov complexity, but I believe even less that that is what Taleb meant.)
Why should they? Do we expect physicists to understand each and every area of physics? Specialization is one of the defining principles of modern societies.
Whilst arguing that uncertainty is best measured using numbers and probabilities:
Dennis V. Lindley, Understanding Uncertainty
[missing the point]
On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X − Y is very, very uncertain. [/missing the point]
^_^
Why isn’t it “very, very uncertain, uncertain”? Anyway, ‘very’ is an adjective. ‘Verily’ is the adverb.
But without the math to prove it, you may wrongly conclude that the uncertainties cancel out and X − Y is quite certain indeed.
--Penn Jillette in “Penn Jillette Is Willing to Be a Guest on Adolf Hitler’s Talk Show,” Vanity Fair, June 17, 2010
This quote seems like it’s lumping every process for arriving at beliefs besides reason into one. “If you don’t follow the process that I understand and that is guaranteed not to produce beliefs like that, then I can’t guarantee you won’t produce beliefs like that!” But there are many such processes besides reason that could be going on in their “hearts” to produce their beliefs. Just because they are all opaque and non-negotiable and not this particular one you trust not to make people murder Sharon Tate does not mean that they all have the same probability of producing plane-flying-into-building beliefs.
Consider the following made-up quote: “when you say you believe something is acceptable for some reason other than the Bible said so, you have completely justified Stalin’s planned famines. You have justified Pol Pot. If it’s acceptable for you, why isn’t it acceptable for them? Why are you different? If you say ‘I believe that gays should not be stoned to death and the Bible doesn’t support me but I believe it in my heart’, then it’s perfectly okay to believe in your heart that dissidents should be sent to be worked to death in Siberia. It’s perfectly okay to believe because your secular morality says so that all the intellectuals in your country need to be killed.”
I would respond to it: “Stop lumping all moralities into two classes, your morality, and all others. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually condone gulags”
And likewise I respond to Penn Jillette’s quote: “Stop lumping all epistemologies into two classes, yours, and the one where people draw beliefs from their ‘hearts’. One of these lumps has lots of variation in it, and sub-lumps which need to be distinguished, because most of them do not actually result in beliefs that drive them to fly planes into buildings.”
The wishful-thinking new-age “all powerful force of love” faith epistemology is actually pretty safe in terms of not driving people to violence who wouldn’t already be inclined to it. That belief wouldn’t make them feel good. Though of course, faith plus ancient texts which condone violence can be more dangerous, though as we know empirically, for some reason, people driven to violence by their religions are rare these days, even coming from religions like that.
I don’t think it’s lumping everything together. It’s criticizing the rule “Act on what you feel in your heart.” That applies to a lot of people’s beliefs, but it certainly isn’t the epistemology of everyone who doesn’t agree with Penn Jillette.
The problem with “Act on what you feel in your heart” is that it’s too generalizable. It proves too much, because of course someone else might feel something different and some of those things might be horrible. But if my epistemology is an appeal to an external source (which I guess in this context would be a religious book but I’m going to use “believe whatever Rameses II believed” because I think that’s funnier), then that doesn’t necessarily have the same problem.
You can criticize my choice of Rameses II, and you probably should. But now my epistemology is based on an external source and not just my feelings. Unless you reduce me to saying I trust Rameses because I Just Feel that he’s trustworthy, this epistemology does not have the same problem as the one criticized in the quote.
All this to say, Jillette is not unfairly lumping things together and there exist types of morality/epistemology that can be wrong without having this argument apply.
‘Act on an external standard’ is just as generalizable—because you can choose just about anything as your standard. You might choose to consistently act like Gandhi, or like Hitler, or like Zeus, or like a certain book suggests, or like my cat Peter who enjoys killing things and scratching cardboard boxes. If the only thing I know about you is that you consistently behave like someone else, but I don’t know like whom, then I can’t actually predict your behavior at all.
The more important question is: if you act on what you feel in your heart, what determines or changes what is in your heart? And if you act on an external standard, what makes you choose or change your standard?
From the outside, it looks like there’s all this undefined behavior, and demons coming out of the nose, because you aren’t looking at the exact details of what’s going on with the feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out if you looked at the implementation of the compiler. It’s still deterministic.
As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It’s entirely internal. From the outside, someone with a system where they follow something external and clearly specified could shout “Nasal demons!”, but demons will never come out my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.
The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven’t bothered to understand, and saying “Who can possibly say what that system will do?”
Some processors (even x86) have instructions (such as bit rotate) which are useful for significant performance boosts in stuff like cryptography, and yet aren’t accessible from C or C++, and to use it you have to perform hacks like writing the machine code out as bytes, casting its address to a function pointer and calling it. That’s undefined behavior with respect to the C/C++ standard. But it’s perfectly predictable if you know what platform you’re on.
The utility functions of people who aren’t meta-ethical anti-realists are not really negotiable either. You can’t really give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot more than I do about having a morality which sounds logical when argued for.
And if you actually examine what’s going on with the feelings of people with feeling-driven epistemology that makes them believe things, instead of just shouting “Nasal demons! Unspecified behavior! Infinitely beyond the reach of understanding!” you will see that the non-psychopathic ones have mostly-deterministic internal structure to their feelings that prevents them from believing that they should murder Sharon Tate. And psychopaths won’t be made ethical by reasoning with them anyway. I don’t believe the 9/11 hijackers were psychopaths, but that’s the holy book problem I mentioned, and a rare case.
In most cases of undefined C constructs, there isn’t another carefully-tuned structure that’s doing the job of the C standard in making the behavior something you want, so you crash. And faith-epistemology does behave like this (crashing, rather than running hacky cryptographic code that uses the rotate instructions) when it comes to generating beliefs that don’t have obvious consequences to the user. So it would have been a fair criticism to say “You believe something because you believe it in your heart, and you’ve justified not signing your children up for cryonics because you believe in an afterlife,” because (A) they actually do that, (B) it’s a result of them having an epistemology which doesn’t track the truth.
Disclaimer: I’m not signed up for cryonics, though if I had kids, they would be.
I very much doubt that. At least with present technology you cannot self-modify to prefer dead babies over live ones; and there’s presumably no technological advance that can make you want to.
If utility functions are those constructed by the VNM theorem, your utility function is your wants; it is not something you can have wants about. There is nothing in the machinery of the theorem that allows for a utility function to talk about itself, to have wants about wants. Utility functions and the lotteries that they evaluate belong to different worlds.
Are there theorems about the existence and construction of self-inspecting utility functions?
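For reference, the representation theorem being invoked: if preferences $\succeq$ over lotteries satisfy the VNM axioms (completeness, transitivity, continuity, independence), then there exists a utility function $u$ on outcomes, unique up to positive affine transformation, such that

```latex
L \succeq M
\quad\Longleftrightarrow\quad
\sum_i p_i \, u(x_i) \;\ge\; \sum_j q_j \, u(y_j),
\qquad L = (p_i; x_i),\; M = (q_j; y_j).
```

Note that $u$ is defined on the outcomes $x_i$, not on utility functions themselves, which is the formal sense in which the theorem has no slot for "wants about wants".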
That means you can actually make people less harmful if you tell them to listen to their hearts instead of listening to ancient texts. The person who’s completely in their head and analyses the ancient text for absolute guidance of action is dangerous.
A lot of religions also have tricks where the believer has to go through painful exercises. Just look at a Christian sect like Opus Dei with its cilices. The kind of religious believer who wears a cilice loses touch with his heart. Getting someone who’s in the habit of causing his own body pain with a cilice to harm other people is easier.
I’d have to disagree here; I think that “faith” is a useful reference class that pretty effectively cleaves reality at the joints, which does in fact lump together the epistemologies Penn Jillette is objecting to.
The fact that some communities of people who have norms which promote taking beliefs on faith do not tend to engage in acts of violence, while some such communities do, does not mean that their epistemologies are particularly distinct. Their specific beliefs might be different, but one group will not have much basis to criticize the grounds of others’ beliefs.
The flaw he’s arguing here is not “faith-based reasoning sometimes drives people to commit acts of violence,” but “faith-based reasoning is unreliable enough that it can justify anything, in practice as well as principle, including acts of extreme violence.”
People who follow the moral code of the Bible versus people who don’t is also a pretty clear criterion that separates some epistemologies from others.
People who use a pendulum to make decisions have a very different epistemology than someone who thinks about what the authorities in his particular church want him to do and acts accordingly.
The kind of people who win the world debating championship also have no problem justifying policies like genocide with rational arguments that win competitive intellectual debates.
Justifying actions is something different from applying decision criteria.
Yes, but then you can go a step down from there, and ask “why do you believe in the contents of the bible?” For some individuals, this will actually be a question of evidence; they are prepared to reason about the evidence for and against the truth of the biblical narrative, and reject it given an adequate balance of evidence. They’re generally more biased on the question than they realize, but they are at least convinced that they must have adequate evidence to justify their belief in the biblical narrative.
I have argued people out of their religious belief before (and not just Christianity,) but never someone who thought that it was correct to take factual beliefs that feel right “on faith” without first convincing them that this is incorrect as a general rule, not simply in the specific case of religion. This is an epistemic underpinning which unites people from different religions, whatever tenets or holy books they might ascribe to. I’ve also argued the same point with people who were not religious; it’s not simply a quality of any particular religion, it’s one of the most common memetic defenses in the human arsenal.
-- Rational!Quirrel, HPMoR chapter 20
In other words: how else can you justify a moral belief and consequent actions, except by saying that you really truly believe in your heart that you’re Right?
We should not conflate the fact that almost all people other than Manson think he was morally wrong with the fact that his justification for his actions seems to me to be of the same kind as the justifications anyone else ever gives for their moral beliefs and actions.
Unlike Quirrell, Penn Jillette is not referring to “knowing in your heart” that your moral values are correct, but to “knowing in your heart” some matters of fact (which may then serve as a justification for having some moral values, or directly for some action).
In what way is “deserve” a matter of fact?
“Deserving” is a moral theorem, not a moral axiom. You can most definitely test and check whether someone deserves something, by asking about the rules of the game and their position within the game.
If there is no game at hand, I would say “deserving” becomes nonsense, but that’s just me.
If you’re a moral realist, and you think moral opinions are statements of fact (which may be right or wrong), then you think it’s possible to “know in your heart” moral “facts”.
If you’re a moral anti-realist (like me), and you think moral opinions are statements of preferences (in other words, statements of fact about your own preferences and your own brain-wiring), then all moral opinions are such. And then surely Manson’s statement of his preferences has the same status as anyone else’s, and the only difference is that most people disagree with Manson.
What else is there?
However, it’s true that Jillette talks about factual amoral beliefs like fairies and gods. So my comment was somewhat misdirected. I still think it’s partly relevant, because people who believe in gods (i.e. most people) usually tie them closely to their moral opinions. It’s impossible to discuss morals (of most humans) without discussing religious beliefs.
That leaves the question of how Penn actually knows that Charlie Manson was acting based on what his heart was telling him.
Psychopaths are frequently bad at empathy or “listening to their hearts”. It might even be the defining characteristic of what makes someone a psychopath.
You missed the point entirely. ‘Listening to their (own) hearts’ is not empathy, it’s just giving credibility to your instinctive beliefs, regardless of whether they have a basis or not. How is believing that everyone is connected by a network of magical energy tethers and acting according to that any different from believing that my soul will be saved if I massacre 40 people and acting on that?
The only difference is the actual acts that you take due to the beliefs. Mind you, it’s a very important difference, but the quote is not talking about that, it’s talking about beliefs themselves and using them as a sufficient justification for acts.
I think that plenty of people who call themselves rationalists simply have no idea what listening to one’s own heart actually means.
It’s like talking with a blind man, who has no concept of how green differs from red, about how one uses a traffic light to decide when to stop a car. “You mean at one time a lamp showed you that you have to stop, and at another time it tells you to go ahead? How do you tell the difference?”
You basically left out the part about listening to your heart. Having a cognitive belief and making decisions based on mental analysis of the consequences of the belief is not what listening to one’s heart is about.
If a human tries to murder another, certain automatic programming fires that dissuades the human from killing. Emotions come up. If you listen to them, you won’t kill. You actually have to refuse to listen to your heart to be capable of killing. Maybe there are a few Buddhists who manage to be in a complete state of pure heartfelt love while they ram a knife into someone else’s heart, but that’s very far from what 99.99% of the population is capable of.
In the military, soldiers get trained to dissociate from the emotions that prevent them from killing others. Psychopaths usually do have a bunch of beliefs about morals. What they lack is the ability to listen to their hearts in a way that guides their actions.
The philosophers of ethics steal more books than other philosophers. It’s not clear that well thought out moral beliefs are useful for preventing people from engaging in immoral actions.
No. Whether or not someone is in their head or listens to their heart can matter to the people around him, if those people are perceptive enough to tell the difference. It probably affects most people on an unconscious level.
Listening to your heart just means listening to your innermost desires. It has nothing to do with empathy. Meaning that psychopaths listen to their heart just as much as anyone else. I’ve never heard anyone use the idiom “listen to your heart” to mean to practice empathy.
Sexual lust would be a desire that is felt not in the heart but elsewhere.
The heart is a specific place in the body. Recently a woman in my meditation group said that she has gotten a perception of the part of her body behind her heart, that that part gives different answers, and that she now experiments with following those answers.
That’s a very high level of self-perception that most people who speak about listening to their heart don’t have. Most people are a bit more vague about what part of their body they are listening to.
There’s a reason why people lay their hand on the heart when making an oath, and not on their heads or their bellies. It does something on a physiological level.
People rather use phrases like having a heartfelt connection or connecting with someone’s heart. To do that you usually need a connection to your own heart.
You’re taking this English idiom too literally. It reminds me of when I mentioned “killing two birds with one stone” to my Italian born girlfriend and she was horrified. I had to explain to her that one is not literally killing two birds with one stone; your continued literalism of this particular turn of phrase would be like her continuing to insist that I’m using a metaphor in my own native language wrong since I’m not using stones nor are any birds around.
A good portion of the New Age crowd takes the idiom literally. Listening to their heart is something different from listening to their gut. Different place in the body. Different qualia.
Penn Jillette’s problem is that he takes something that’s meant literally and pretends that it means something different. It’s like talking to a blind man who thinks that the red and green you speak of are metaphors for apples and trees.
I grant that there are people who just talk the talk and don’t walk the walk, and who don’t mean it literally. People who read too many books. But it’s a strawman to assume that all people are like this.
Why should they have any such perception? The literal heart doesn’t provide any answers whatsoever, the “heart” answers are generated in the brain as much as any of the other ones.
There are plenty of neurons outside the brain, so I don’t know whether that’s true. Regardless, the motor cortex has somewhere a representation of the heart that’s “in the brain”. Given that phantom limbs can hurt, it’s probably somewhere in the motor cortex, with feedback channels to the actual body location.
That’s a complicated question.
I would preface it by saying that language is evolutionarily a recent invention. We are not evolved for that purpose. It’s a byproduct. An accident more than a planned thing. A dog doesn’t need to have a verbalized understanding of a situation to decide whether to do A or B.
It delves into the nature of what emotions are. In academia you have plenty of people who are, in a practical sense, blind when it comes to perceiving what happens in their body. People who have declared blindness a virtue.
If a man gets an erection and his attention goes to that part of his body, it’s evolutionarily useful for the man to do things that lead to having sex.
If the same man has an empty stomach and his attention goes to perceiving the feeling of an empty stomach, that in turn leads to different actions.
Somewhere along those lines it made “sense” for evolution to develop a system of emotions where emotions are “located” somewhere in the body. Reuse of already existing neural patterns might also play a huge role. Evolution frequently works by reusing parts that already exist and were built for other purposes.
Years ago, in an effort to understand the brain, I bought a book called Introducing the Mind and Brain by Angus Gellatly, who’s a professor of Cognitive Psychology.
At the beginning when he recaps the history of the mind he writes:
At the time I first read those words, I also agreed with the strangeness of the idea. Now, years later, I’m in touch with my body well enough to completely understand why it makes sense to speak that way. I’m not blind anymore. Even on a bad day I can tell apart midriff/stomach, heart and head. I also know people with better kinesthetic perception than myself.
To return a hard question: why do you think that humans have beliefs? The concept doesn’t seem straightforward enough that it was around in Homer’s day. Do you think dogs have them? Doves? Ants? Caenorhabditis elegans?
Bonus question: when do you think that humans started “believing” in beliefs?
Re: Homer’s vocabulary not including mental terms: this is one of the things that Julian Jaynes points to as evidence of his “bicameral mind”. Do you happen to know whether the book you read has any connection to Jaynes’ work?
The book that I read is mostly an introduction to neuroscience that says a bunch of things everyone is supposed to know and illustrates them with pretty pictures. It begins like a lot of textbooks, by talking about the history of the subject. It’s not the kind of book that tries to say something new.
Julian Jaynes isn’t referenced. But given that Jaynes is widely read, I think there’s a good chance that a Cognitive Psychology professor like Gellatly has read him.
In general, reading on Wikipedia that Jaynes influenced Daniel Dennett is funny when Dennett says things like that consciousness doesn’t exist or is a lie that the brain tells itself. The thing that Jaynes calls consciousness might be called ‘ego’ by a Buddhist who wants to transcend it to reach a state of higher consciousness.
I would say that this is probably a result of different emotions being associated with certain physiological responses. The body reacts to what’s going on in the brain, and the brain gets further feedback from that.
I recognize the responses from various parts of my body when I think, but that doesn’t mean that other parts of my body are doing the thinking for me, or that imagining they are would result in my making better decisions.
Could you make clearer what you mean by beliefs, or what it means to “believe” in beliefs? As-is, the questions seem too vague to adequately answer.
In Homer’s time there was no concept of beliefs. In this discussion there’s the notion that people who listen to their hearts somehow develop the wrong beliefs, and that that’s bad.
So, whatever Penn Jillette means when he says “believe”. In case you think that’s not a coherent concept, that would also be an answer that I would accept.
I’m not arguing better or worse. I’m arguing different. People who listen to their hearts don’t go on killing sprees. They won’t push fat men off bridges. If you think that not enough fat men are pushed off bridges, then you might argue against “listening to your heart”, but that’s a very different discussion.
If I’m having this discussion on LW, I’m mostly in my head. That’s completely appropriate. If I were mainly in my head while dancing Salsa, that would lead to a lot of bad decisions during Salsa dancing. Beyond bad decisions, if the girl with whom I’m dancing is perceptive, it will feel inappropriate to her.
I’d like to point out that this is not an established fact. This is a theory which has been debated and I don’t think made it to the mainstream status. It is also my impression that the Odyssey is somewhat different from the Iliad in that regard.
The book from which I took it is a mainstream introduction to cognitive science, written by a professor of cognitive psychology who has published papers. I read it because someone in my bioinformatics university course recommended it to me as an introduction. What do you mean by “mainstream status”? Does that not count as mainstream?
By mainstream status I mean “generally accepted in the field as true”. Lots of professors publish lots of books with claims that are not generally accepted as true. Sometimes this is “not yet”, sometimes it is “not and never will be, because they are wrong”, and sometimes it is “maybe, but the probability looks low and there are better approaches”.
First I haven’t investigated the issue beyond this one book. If you know of a good source arguing the opposite, I’m happy to look up your reference.
Secondly, I don’t think it’s useful to equate mainstream belief with consensus belief. I think it’s quite useful to have a term for ideas found in mainstream science textbooks, as opposed to ideas that you don’t find in mainstream science textbooks.
Science by its nature isn’t certain, and science textbooks can contain claims that aren’t true. If I’m discussing a topic like this, I think it’s useful to be clear about which of my ideas come from a mainstream science source and which come from other sources, such as personal experience or an NLP seminar.
For the purposes of the point that I made, it’s also not important whether Homer in particular had a concept of beliefs, or whether I could find some African tribe that doesn’t have a word for it. The point is to go back and question core assumptions, getting clearer about the mental concepts that one uses because one doesn’t take them for granted.
Don’t model human cognition in form of beliefs just because your parents told you that humans make decisions according to beliefs. I think that’s a core part of the rationalist project.
At a LW meetup I ran a session about emotions and asked at the start what everyone thought the word meant. Roughly a third said A, a third said B, and the last third had no opinion.
If you are not clear about what you mean when you say “believe” and make complex arguments that build on the term, you are going to make mistakes and not see them, because your terms are muddy and you are making a bunch of assumptions about which you never thought explicitly.
Yes, and that professor is a professor of cognitive psychology, not history.
If we’re talking about Penn Jillete’s conception of “beliefs”, then I would say that he probably has in mind pieces of information that our minds can represent and reason about abstractly, although this is of course somewhat speculative as I cannot speak for Penn Jillette. I would say that this probably doesn’t apply to the other species you named, but may apply to some other existing species, and probably some of our ancestors in the Homo genus.
I would regard this as a highly extraordinary claim demanding commensurately extraordinary evidence, and I would caution that this is a case which seems very prone to inviting the No True Scotsman fallacy. First off, how would you determine whether an individual listens to their heart or not, and second, how do you know that individuals who listen to their hearts don’t engage in such antisocial behaviors?
There are people who listen to their heads who go on killing sprees. I believe Christian’s claims is that listening to one’s heart is either uncorrelated or negatively correlated with going on killing sprees.
I don’t believe this is the case; I think the continuation of this discussion in other comments has made it pretty clear that he’s arguing that, while listening to their hearts, people do not go on killing sprees at all.
At the moment, by observing and checking whether specific qualia are there. If I really wanted to make the case with numbers, that would require that I systematically calibrate my own perception first and determine the sensitivity and specificity of my perception of other people.
I’m also still a person who’s fairly intellectual. There are people with better perception than myself and getting them to do the assessing might be better.
Having a way to get that data via a more automated process that doesn't need a perceptive human would also be nice. At the moment, however, I have no clear idea of how to go about measuring it, nor the financial resources to fund that kind of research.
A mix of more theoretical thinking and practical observation of how the behavior of people I'm interacting with changes when the qualia I perceive suggest that the locus of their attention within their body has changed.
I understand that’s an advanced claim. At the moment I’m more concerned with making clear what the claim is than proving it.
If I say that Harry is not going to kill people if he listens to Hufflepuff but might kill if he listens to Slytherin, would that be a strange claim for you? If I say that people who always listen to Hufflepuff don't go on killing sprees, would that seem strange to you? Most people you know don't have the ability to mentally commit to listening 100% to Hufflepuff in every decision that they make in their lives.
If I remember right, Eliezer uses those different personas because it's popular in systemic therapy to do so, and someone he knows taught him that thinking that way can be useful. Those personas have a different quality than organs that can be perceived kinesthetically, but they are not that different.
Lastly, it's useful to keep in mind what "extraordinary claims need extraordinary evidence" can lead to. If you take it too far, it shuts people down from saying what they honestly believe and instead lets them argue beliefs that they don't fully stand behind.
We all have many beliefs that come out of personal experience rather than from reading papers. There are areas where personal experiences differ massively. In those cases we don't get certainty about what's true when someone else tells us how he thinks the world works. Simply understanding the models of other people is still useful, because you might use such a model sometime in the future, when it explains something you see better than your other mental models do.
No and yes respectively.
Hufflepuff isn’t a natural category, Harry!Hufflepuff is an abstraction based on Harry filtering his personality through certain criteria and impulses, such as what he conceives of as loyalty and compassion. Do I think that Harry, reasoning through his conception of loyalty and compassion, would go on a killing spree? Unlikely. Do I think that there are people who, reasoning through their conceptions of loyalty and compassion, would go on killing sprees? Absolutely.
A neurological fact that may be of some relevance here: oxytocin, the chemical associated with triggering feelings of love and affection, has also been found to trigger increases in xenophobia and ingroup/outgroup bias.
Feelings of love and loyalty are not anathema to hate and violence. Rather, they often go hand in hand; the same feelings that unite you with a group can also be those which make you feel you’re united against something else.
How so? I don’t take any issue with your stating your beliefs and arguing in their favor. As is, I think that they’re misguided, but that’s because I think the weight of evidence is not in their favor. If you convinced me it were, I would change my mind. I think it would be far more useful for you to defend your belief with the best evidence you think favors it than to simply assert your belief.
Depends on what you mean by "natural". Personas like that are probably as natural as beliefs are. Neither is hard-coded; both develop over time. I would guess that for most people on LW, personas like that aren't in their conscious awareness. That doesn't mean they don't influence decision making.
In particular, the persona that represents the parents often has a strong effect on people's decisions in life.
Hate is usually felt in the midriff/gut/belly/stomach area and not where the heart is.
People also don't just go on killing sprees at random. It's the result of a longer process. Men often fail at approaching a hot woman because they have emotions that block them from doing so. It's necessary to process those emotions first before being able to approach her.
Simply being attracted to the woman isn't enough to overrule those other processes that prevent that behavior. When it comes to behavior like killing another person, I would assume that the emotional barriers are even stronger.
To take an example from WWII:
That suggests that you need a lot more than a bit of oxytocin for increased ingroup/outgroup bias. I don't think that the US army failed at teaching its soldiers the belief that shooting at the enemy makes sense.
The modern solution to getting soldiers to fire at the enemy is desensitization training.
I think I observed in the last year two people who would qualify clinically as psychopathic. Both appeared to me very absent from their own bodies (a description of a qualia that I have), magnitudes more so than the other people with whom I interacted that year.
Let's say someone gets dumped by his girlfriend. His heart hurts very much, enough that he'd rather not listen to it, to reduce the pain. He blocks out the feeling by dissociating from it. The person also feels very angry in his midriff and wants to act out that anger. That person might kill his girlfriend in revenge. There are probably a bunch of other filters he has to overcome.
I think you underrate the difficulty of communicating what the belief actually is, without expressing it in a way that leads you to think I believe something different from what I actually believe. The Jaynes example shows how a word like "consciousness" might be interpreted opposite to how it's meant.
I'm essentially trying to explain new phenomenological primitives. Telling someone who's not well educated in physics that a steel ball thrown at the ground bounces back because of springiness, in a way that will be understood, is not easy. The idea that the steel ball deforms like a spring is not easy to accept. Even for students who believe that their physics teacher tells them the truth, it takes time to accept that idea.
Some of the literature is very pessimistic about the idea of teaching new phenomenological primitives in physics classes, instead of reorganising existing ones, even if you are a teacher with authority over students and have plenty of time.
Attempting to do the same thing in an online discussion is ambitious.
-- Tom Stoppard, The Real Thing
-- Reagan and Scipio debate the nature of definitions. From Templar, Arizona
Plutarch, “De Auditu” (On Listening), a chapter of his Moralia.
This essay is also the original source of the much-quoted line “The mind is not a pot to be filled, but a fire to be ignited.” It is variously attributed, but is a fair distillation of the original passage, which comes directly before the quote above:
Donald Knuth on the difference between theory and practice.
Duplicate.
Or with smart people who profit at the state’s expense when it rescues fools from their mistakes. If it’s known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. That said, admittedly, not all risk-taking is created equal.
That’s exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.
Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.
The inventor is still bearing the costs of the bankruptcy. The creditors are bearing (some of) the costs of the failure, which is not the same thing.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery. Others include various forms of formal or informal corporal or capital punishment applied to those who cannot pay their debts.
That seems right, and it also seems as though the opposite is sometimes right. If a company knows it can reap the benefits of operations (e.g., of product sales) without bearing the cost of those risks associated with its operations (e.g., of pollution), is this a case of risk-taking being oversupplied?
Pollution does not seem particularly well described by risk or risk-taking; it's basically a certainty with industrial operations.
In the same way that “product sales” was intended to refer to the result (income), “pollution” was intended to refer to the result (health problems, etc.). While one might think that some result is basically a certainty, the scope and degree of real problems is frequently uncertain. An entrepreneur who weighs potential public health risks does not seem any more difficult to imagine than one who weighs potential bankruptcy risks.
At any rate, pollution is merely an example; you can take any other example you find more suitable.
On thrust work, drag work, and why creative work is perpetually frustrating --
“Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. … Just as momentum build to truly exciting levels…some new limitation appears squelching that momentum. …The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to capture leads. Perhaps you lacked the bandwidth necessary to follow up on all the new opportunities. Perhaps, due to lack of experience, you pursued the wrong opportunities. Perhaps you just didn’t know what to do next – you outran your existing knowledge base. In one way or another new varieties of drag emerge. The accelerating curve you had been riding becomes unsustainable and you find yourself mired in the slow build of the next episode. This is what we experience as anti-climax and temporary stagnation.”—Greg Raider, from his essay “A Pilgrimage Through Stagnation and Acceleration”
The whole piece is worth reading; it's really good -- http://onthespiral.com/pilgrimage-through-stagnation-acceleration
Hat tip to Zach Obront for linking me to it originally.
-- Meta --
Shouldn’t this be in Main rather than Discussion? I PM’ed the author, but didn’t get a response.
EDIT: Thanks.
Lev Vygotsky, Mind and Society (1930), transcribed by Andy Blunden and Nate Schmolze
Online: http://www.cles.mlc.edu.tw/~cerntcu/099-curriculum/Edu_Psy/EP_03_New.pdf
There have been many theories of child development. What singles this one out as noteworthy?
Because it is a key insight (stated in 1930) into the development of practical intelligence, i.e. intelligence applicable to general and real-life problems, which the AI community arrived at only in the late 1980s.
http://en.wikipedia.org/wiki/Embodied_cognition#History_of_AI
Attributed to Malcolm Forbes.
If it weren't for the ban on Robin Hanson quotes, the appropriate response would be too obvious.
That said, I really wish I lived in a world where that quotation was true.
“Did many people die?”
“Three thousand four hundred and ninety-two.”
“A small proportion.”
“It is always one hundred percent for the individual concerned.”
“Still...”
“No, no still.”
-- Iain M. Banks, Look to Windward
Does this quote have any rationalist content beyond the usual anti-deathism applause light?
And here I looked at that and saw an example of how not to "shut up and multiply". Though I suppose it could also be a warning about scope insensitivity / psychophysical numbing, if the risk at hand required an absolute payment to stave off rather than a per-capita payment: in the former case only absolute numbers matter, while in the latter per-capita risks matter.
Maybe I need to include more context. This conversation occurs after the multiplication was done. This was discussing the aftermath, which had been minimized as much as the minds in question could manage. I took it to mean that, once you have made the best decision you can, there is no guarantee that you will be happy with the outcome, just that it would likely have been worse had you made any other decision.
I think the inability to include that context and make your interpretation clear means that it's a bad rationality quote, because it's far too easily taken as a "consequentialism boo!" quote.
-- Richard Fumerton, Epistemology
Really? So, say, if I put a bone on the other side of the river, the dog doesn’t know that it can swim across?
How would one tell?
First, you offer them a sequence of bets such that...oh wait.
“Go work in AI for a while, then come back and write a book on epistemology,” he thought.
Upon reading this, he wanted to map out the argumentative space in his head and decided to try to draw a line at one end, saying "Let's not get nuts. Mercury thermometers can react differentially to temperature, but they don't know how hot it is."
[citation needed]
Do dogs not know that bones are nice?
--Israel Gelfand, found here
Far be it from me to argue with Gelfand, but, having done some extensive tutoring, I think that sometimes the best way to "turn these peculiarities into advantages" is to direct the student to a more suitable career path. Face it, some people just naturally suck at math. Sure, they can be drilled to do well on high-school math exams, with many times the effort an average student spends on it (that's what Kumon is great at: drills upon more drills, with a gradual progression toward System 1-level mastery). But this is a waste of time and effort for everyone involved. Their time and effort is more productively spent on creative writing, dancing, debating, or whatever else these "peculiarities" hint at. Math is no exception, of course; it gets all the attention as a hard course because of the unreasonably high requirements relative to other subjects.
I think you’re right about the very general form of the quote. However, it still might be worth at least some teachers’ time to look at how peculiarities might be advantages.
I'm never sure what to do with these kinds of rationality quotes. On the one hand, they are obviously literally false, but on the other hand, they may be pushing against our biases in the right direction.
I’d say the obvious thing to do is comment to that effect. So far as karma is concerned, I have no strong opinion.
Ana Mardoll, Twilight deconstruction
Kurtz’ English girlfriend, in Heart of Darkness by Joseph Conrad, failing to notice confusion
Should be its own quote :)
Assuming there is nothing in between two extremes is a fallacy (the false dilemma). And that's what it is to say that there's nothing between "standards of proof strict enough for a court of law" and "not believing anything we didn't see".
Also, I suspect she would change her tune about innocent until proven guilty if she was being accused of a serious crime, whether in a court of law or not.
(Edited to add context)
Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.
Atlas Shrugged
It gets at the idea talked about here sometimes that reality has no obligation to give you tests you can pass; sometimes you just fail and that’s it.
ETA: On reflection, what I think the quote really gets at is that Taggart cannot understand that his terminal goals may be only someone else's instrumental goals, and that other people are not extensions of himself. Taggart's terminal goal is to run as many trains as possible. If he can help a customer, then the customer is happy to have Taggart carry his freight, and Taggart's terminal goal aligns with the customer's instrumental goal. But the customer's terminal goal is not to give Taggart Inc. business, but just to get his freight shipped. If the customer can find a better alternative, like a competing railroad, he'll switch. For Taggart, of course, that is not a better alternative at all, hence his anger and confusion.
(Apologies for lack of context initially).
Without context, it’s a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...
I’ve read AS a while ago, and I still don’t remember enough of the context to interpret this quote...
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.
Samuel Beckett
Duplicate, although a good sentiment.
-- Einstein, supposedly
If only it were that easy in real life...
Gratuitous image + obscure reference + anti-deathism not firewalled from rationalism = downvote, sorry.
The term “rationalism” has a previously-established meaning quite different from LW-style rationality.
IMO, "freethinker" captures the intended reference better.
To me “freethinker” conjures up associations with smug atheist and “skeptic” communities, so I’m not sure if I would consider it better.
You see some difference between Lesswrongians and smug skeptics???
Smug skeptics don’t say things like “The fact that there are myths about Zeus is evidence that Zeus exists”.
In common parlance, “no evidence for” means “no good evidence for”. Saying that myths are not evidence for Zeus is not being smug; it’s being able to comprehend English.
I could just as well complain about people saying “I constantly hear fallacies” by asking them if they hear fallacies when they are asleep, and if not, why they are being so smug about an obviously false statement.
I’m not saying that it’s necessary to say things like that to not be a smug skeptic. On the other hand it’s sufficient.
For a Bayesian there are no such things as good or bad evidence. "Good" and "bad" indicate approval and disapproval. There's weak and strong evidence, but even weak evidence means that your belief in a statement should be higher than without that evidence.
It looks to me to be rather clear that what is being said (“myths are not evidence for Zeus”) translates roughly to “myths are very weak evidence for Zeus, and so my beliefs are changed very little by them”. Is there still a real misunderstanding here?
You are making a mistake in reasoning if you don't change your belief in response to that evidence. Your belief should change by orders of magnitude. A change from 10^{-18} to 10^{-15} is a strong change.
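A minimal sketch of the update being described, in odds form, with illustrative numbers only (the prior of 10^{-18} and a likelihood ratio of 1000 are assumptions for the example, not claims about Zeus):

```python
# Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
# Even "weak" evidence with a likelihood ratio of 1000 moves a tiny prior
# by three orders of magnitude.

def update(prior, likelihood_ratio):
    """Posterior probability after evidence with ratio P(E|H)/P(E|~H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1e-18                    # illustrative prior that Zeus exists
posterior = update(prior, 1000)  # assumed likelihood ratio for the myths
print(posterior)                 # ~1e-15: still tiny, but a strong change
```

The absolute number stays negligible; the point is only that the shift itself is large on a log scale.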
The central reason to believe that Zeus doesn't exist is a very low prior.
Skeptics have the idea that someone has to prove something to them before they believe it. In the Bayesian worldview you always have probabilities for your beliefs; social obligations aren't part of it. "Good" evidence means that someone fulfilled a social obligation of providing a certain amount of proof. It doesn't refer to how strongly a Bayesian should update after being exposed to a piece of evidence.
There are very strong instincts for humans to either believe X is true or to believe X is false. It takes effort to think in terms of probabilities.
Where do those numbers come from?
In this case they come from me. Feel free to post your own numbers.
The point of choosing Zeus as an example is that it's a claim that's probably not going to mindkill anyone. That makes it easier to talk about the principles than using an example where the updating actually matters.
In other words, you made them up. Fictional evidence.
you did say (my emphasis)
Why should a myth about Zeus change anyone’s belief by “orders of magnitude”?
I’d buy it. Consider all the possible gods about whom no myths exist: I wouldn’t exactly call this line of argument rigorous, but it seems reasonable to say that there’s much stronger evidence for the existence of Baduhenna, a Germanic battle-goddess known only from Tacitus’ Annals, than for the existence of Gleep, a god of lint balls that collect under furniture whom I just made up.
Of course, there’s some pretty steep diminishing returns here. A second known myth might be good for a doubling of probability or so—there are surprisingly many mythological figures that are very poorly known—but a dozen known myths not much more than that.
Is this a case where orders of magnitude aren't so important and absolute numbers are? I'm not sure how to even assign probabilities here, but let's say we assign Baduhenna a 0.0001% chance of existing, and Gleep 0.00000000000001%. That makes Baduhenna several orders of magnitude more likely than Gleep, but she's still down in the noise, below the level at which we can reliably reason. For all practical purposes, Baduhenna and Gleep have the same likelihood of existing. I.e., the possibility of Baduhenna makes no more or less impact on my choices or anything else I believe in than does the possibility of Gleep.
The US military budget is billions.
Nobody makes sacrifices to Baduhenna. You might spend a hundred dollars to get a huge military advantage by making sacrifices to Baduhenna.
If you shut up and calculate, a 0.0001% chance of Baduhenna existing might be enough to change actions.
A lot of people vote in presidential elections when the chance of their vote turning the election is worse than 0.0001%. If the chance of turning an election through voting were 0.00000000000001%, nobody would go to vote.
There are probably various Xrisks with 0.0001% chance of happening. Separating them from Xrisks with 0.00000000000001% chance of happening is important.
My point is that we can’t shut up and calculate with probabilities of 0.0001% because we can’t reliably measure or reason with probabilities that small in day-to-day life (absent certain very carefully measured scientific and engineering problems with extremely high precision; e.g. certain cross-sections in particle physics).
I know I assign very low probability to Baduhenna, but what probability do I assign? 0.0001%? 0.000001%? Less? I can't tell you. There is a point at which we just say the probability is so close to zero as to be indistinguishable from it.
When you’re dealing with probabilities of specific events, be they XRisks or individual accidents, that have such low probability, the sensible course of action is to take general measures that improve your fitness against multiple risks, likely and unlikely. Otherwise the amount you invest in the highly salient 0.0001% chance events will take too much time away from the 10% events, and you’ll have decreased your fitness.
For example, you can imagine a very unlikely 0.0001% event in which a particular microbe mutates in a specific way and causes a pandemic. You could invest a lot of money in preventing that one microbe from becoming problematic. Or you could invest the same money in improving the science of medicine, the emergency response system, and the general healthcare available to the population. The latter will help against all microbes and a lot of other risks.
Do you vote in presidential elections? Do you wear a seat belt every time you drive a car, and would you also do so on vacation in a country without laws that force you to?
How do you know that will reduce and not increase the risk of a deadly bioengineered pandemic?
Yes, reasoning about low-probability events is hard. You might not have the mental skills to reason in a decent manner about low-probability events.
On the other hand that doesn’t mean that reasoning about low probability events is inherently impossible.
Do you? You were unable or unwilling to say how you came up with 10^-18 and 10^-15 in the matter of Zeus. (And no, I am not inclined to take your coming up with numbers as evidence that you employed any reasonable method to do so.)
Intuition can be a reasonable method when you have enough relevant information in your head.
I'm good enough that I wouldn't make the mistake of calling Baduhenna's existence or Zeus's existence a 10^{-6} event.
It is possible that I might have said 10^{-12} instead of 10^{-15} if I had been in a different mood the day I wrote the post.
When we did Fermi estimates at the European Community Event in Berlin, there was a moment when we had to estimate the force that light from the sun exerts on Earth. We had no good idea of how to do a Fermi estimate for it. We settled for Jonas, who thought he had read the number in the past but couldn't remember it, writing down an intuitive guess. He wrote 10^9 and the correct answer was 5.5 * 10^8.
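For reference, that estimate can be checked with a rough back-of-the-envelope calculation: for fully absorbed light, the force is the solar constant divided by the speed of light, times Earth's cross-sectional area. The rounded textbook values below are my own inputs, so expect agreement only to about one significant figure:

```python
# Radiation-pressure force of sunlight on Earth (full absorption assumed):
# F = (solar constant / c) * cross-sectional area of Earth.

solar_constant = 1.4e3   # W/m^2 at Earth's distance (approx.)
c = 3.0e8                # speed of light, m/s
earth_radius = 6.4e6     # m (approx.)

area = 3.14159 * earth_radius ** 2   # cross-section, ~1.3e14 m^2
force = solar_constant * area / c    # ~6e8 N, same order as 5.5e8
print(f"{force:.1e} N")
```

The result lands within about 10% of the quoted 5.5 * 10^8, which is as good as a Fermi estimate needs to be.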
As a practical matter, telling the difference between 10^{-15} and 10^{-12} isn't that important. On the other hand, reasoning about whether the chance that the Large Hadron Collider creates a black hole that destroys Earth is 10^{-6} or 10^{-12} is important.
I think a 10^{-6} chance of creating a black hole that destroys the Earth should be enough to avoid doing experiments like that. In that case I think the probability wasn't 10^{-6}, so it was okay to run the experiment, but with the increased power of technology we might have more experiments that actually do have a 10^{-6} Xrisk chance, and we should avoid running them.
I don’t know what this means. On the basis of what would you decide what’s “reasonable” and what’s not?
There is a time-honored and quite popular technique called pulling numbers out of your ass. Calling it “intuition” doesn’t make the numbers smell any better.
See “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” on Slate Star Codex, though I agree that a human’s intuition for probabilities well below 1e-9 is likely to be very unreliable (except for propositions in a reference class containing billions of very similar propositions, such as “John Doe will win the lottery this week and Jane Roe will win the lottery next week”).
The only thing that matters is making successful predictions; how they smell doesn't. To know whether a method makes successful predictions, you calibrate the method against other data. That then gives you an idea of how accurate your predictions happen to be.
Depending on the purpose for which you need the numbers, different amounts of accuracy are good enough. I'm not making some Pascal's mugging argument that people are supposed to care more about Zeus, where I would need to know the difference between 10^{-15} and 10^{-16}. I made an argument about how many orders of magnitude my beliefs should be swayed.
My current belief in the probability of Zeus is uncertain enough that I have no idea if it changed by orders of magnitude, and I am very surprised that you seem to think the probability is in a narrow enough range that claiming to have increased it by order of magnitude becomes meaningful.
You can compute the likelihood ratio without knowing the absolute probability.
Being surprised is generally a sign that it’s useful to update a belief.
I would add that given my model of you it doesn’t surprise me that this surprises you.
You can call it heuristics, if you want to...
No, I can't. Heuristics are a kind of algorithm that provides not optimal but adequate results. "Adequate" here means "sufficient for a particular real-life purpose".
I don’t see how proclaiming that the probability of Zeus existing is 10^-12 is a heuristic.
Intuition (or educated guesses like the ones referred to here), fall under the umbrella of heuristics.
In what way are you arguing that the number I gave for the existence of Zeus is insufficient for a particular real-life purpose?
Because the probability of there being a myth about Zeus, given that Zeus exists, is orders of magnitude higher than the probability of there being a myth about Zeus, given that he does not exist?
This seems obviously empirically false. Pick something that everyone agrees is made up: there are way more stories about Cthulhu than there are stories about any random person who happens to exist.
One of my kids more readily knows Paul Bunyan and John Henry than any US president. The fiction section of the library is substantially larger than the non-fiction. The probability that A exists, given that A is in a story, seems very, very small.
Given that the myths about Zeus attribute vast supernatural properties to him, and we now know better than to believe in any such stuff (we don’t need Zeus to explain thunder and lightning), the myths are evidence against his existence. For the ancient Greeks, of course, it was not so, but the question is being posed here and now.
Also, myths are generally told more of imaginary entities than real ones, not less. Myths are all that imaginary creatures have going for them. How many myths are there about Pope Francis? I expect there are some unfounded stories going around among the devout, but nothing on the scale of Greek mythology. So no, P(myths about Zeus|Zeus is real) is not larger, but smaller than P(myths about Zeus|Zeus is imaginary).
On the other hand, it is larger than P(myths about Zeus|no such entity has even been imagined). The latter is indistinguishable from zero—to have a myth about an entity implies that that entity has been imagined. So we can conclude from the existence of myths that Zeus has been imagined. I’m fine with that.
I see your problem here: you're restricting attention to things that either exist or have had myths told about them. Thus it's not surprising that you find that they are negatively correlated. If you condition on at least one of A or B being true, then A and B will always negatively correlate.
(BTW, this effect is known as Berkson’s paradox.)
Thanks, I knew it had a Wikipedia entry and spent nearly 10 minutes looking for it before giving up.
What definition of “myth” are you using that doesn’t turn the above into a circular argument?
The original context was a slogan about myths of Zeus, but there are myths about real people. Joan of Arc, for example. So this is not true by definition, but an empirical fact.
I had no particular definition in mind, any more than I do of “Zeus” or any of the other words I have just used, but if you want one, this from Google seems to describe what we are all talking about here:
Great heroes with historical existence accrete myths of supernatural events around them, while natural forces get explained by supernatural beings.
Heracles might have been a better example than Zeus. I don’t know if ancient Greek scholarship has anything to say on the matter, but it seems quite possible that the myths of Heracles could originate from a historical figure. Likewise Romulus, Jason, and all the other mortals of Graeco-Roman mythology. These have some reasonable chance of existing. Zeus does not. But by that very fact, the claim that “myths about Heracles are evidence for Heracles’ existence” is not as surprising as the one about Zeus, and so does not function as a shibboleth for members of the Cult of Bayes to identify one another.
Notice that in the above argument you’re implicitly conditioning on gods not existing. We’re trying to determine how the existence of myths about Zeus affects our estimate that Zeus exists. You’re basically saying “I assign probability 0 to Zeus existing, so the myths don’t alter it”.
I'm decently calibrated on the credence game and have made plenty of PredictionBook predictions. The idea of Bayesianism is that it's good to boil down your beliefs to probability numbers.
If you think my argument is wrong, provide your own numbers: P(Zeus exists | myths exist) and P(Zeus exists | myths don't exist).
There's really no point discussing Zeus further if you aren't willing to put numbers on your own beliefs. Apart from that, I linked to a discussion about Bayesianism, and you might want to read it if you want a deeper understanding of the claim.
You cannot use the credence game to validate your estimation of probabilities of one-off situations down at the 10^-18 level. You will never see Zeus or any similar entity.
I am familiar with the concept. The idea is also that it’s no good pulling numbers out of thin air. Bayesian reasoning is about (1) doing certain calculations with probabilities and evidence—by which I mean numerical calculations with numbers that are not made up—and (2) where numerical calculation is not possible, using the ideas as a heuristic background and toolbox. Assigning 10^-bignum to Zeus existing confuses the two.
Look! My office walls are white! I must increase my estimated probability of crows being bright pink from 10^-18 to 10^-15! No, I don’t think I shall.
Earlier you wrote:
The central reason to believe that Zeus doesn’t exist is the general arguments against the existence of gods and similar entities. We don’t see them acting in the world. We know what thunder and lightning are and have no reason to attribute them to Zeus. Our disbelief arose after we already knew about the myths, so the thought experiment is ill-posed. “The fact that there are myths about Zeus is evidence that Zeus exists” is a pretty slogan but does not actually make any sense. Sense nowadays, that is. Of course the ancient Greeks were brought up on such tales and I assume believed in their pantheon as much as the believers of any other religion do in theirs. But the thought experiment is being posed today, addressed to people today, and you claim to have updated (from what prior state?) from 10^-18 to 10^-15.
There is really no point discussing Zeus, period.
The point of having the discussion about Zeus is Politics is the Mind-Killer. The insignificance of Zeus’s existence is a feature, not a bug.
If I were to argue that the average person’s estimate of the chance that a single unprotected act of sex with a stranger infects them with HIV is off by two orders of magnitude, then that topic would mind-kill. The same is true for other interesting claims.
I agree with this comment, but I want to point out that there may be a problem with equating the natural language concept “strength of evidence” with the likelihood ratio.
You can compare two probabilities on either an additive or multiplicative scale. When applying a likelihood ratio of 1000, your prior changes by a multiplicative factor of 1000 (this actually applies to odds rather than probabilities, but for low-probability events, the two approximate each other). However, on an additive scale, a change from 10^{-18} to 10^{-15} is really just a change of less than 10^{-15}, which is negligible.
The multiplicative scale is great for several reasons: The likelihood ratio is suggested by Bayes’ theorem, it is easy to reason with, it does not depend on the priors, several likelihood ratios can easily be applied sequentially, and it is suitable for comparing the strength of different pieces of evidence for the same hypothesis.
The additive scale does not have those nice properties, but it may still correspond more closely to the natural-language concept of “strength of evidence”.
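The contrast between the two scales can be sketched numerically. This is a minimal illustration (the numbers match the 10^-18 prior and likelihood ratio of 1000 discussed above; the function name is mine, not from the thread):

```python
# Sketch: multiplicative (likelihood-ratio) vs additive view of a Bayesian update.
# Illustrative numbers from the thread: prior 1e-18, likelihood ratio 1000.

def posterior(prior, likelihood_ratio):
    """Update a probability via odds: posterior_odds = prior_odds * LR."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

prior = 1e-18
post = posterior(prior, 1000.0)

print(post)           # ~1e-15: for tiny probabilities, odds and probabilities coincide
print(post / prior)   # multiplicative change: ~1000, the likelihood ratio
print(post - prior)   # additive change: ~1e-15, negligible on an absolute scale
```

On the multiplicative scale the evidence looks strong (a factor of 1000); on the additive scale the belief has barely moved.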
I have not said that it’s strong evidence. I said it’s evidence.
Yes, that is probably clear to most of us here. But in reality, I (and most likely also you) discount probabilities that are very small, instead of calculating them out and changing our actions (we’ll profess ‘this is very unlikely’ instead of ‘this is not true’, but what actually happens is the same thing). There’s a huge number of probability-10^{-18} deities out there; we just shrug and assume they don’t exist unless strong enough (or ‘good’ enough, I still don’t see the difference there) evidence comes up to push that probability into the realm of probabilities worth actually spending time and effort thinking about.
This hypothetical skeptic, if pressed, would most likely concede that sure, it is /possible/ that Zeus exists. He’d even probably concede that it is more likely that Zeus exists than that a completely random other god with no myths about them exists. But he’d say that is fruitless nitpicking, because both of them are overwhelmingly unlikely to exist and the fact that they still might exist does not change our actions in any way. If you wish to argue this point, then that is fine, but if we agree here then there’s no argument, just a conflict of language.
I’m trying to say that where you would say “Probability for X is very low”, most people who have not learned the terminology here would normally say “X is false”, even if they would concede that “X is possible but very unlikely” if pressed on it.
Given that someone like Richard Kennaway who’s smart and exposed to LW thinking (>10000 karma) doesn’t immediately find the point I’m making obvious, you are very optimistic.
People usually don’t change central beliefs about ontology in an hour after reading a convincing post on a forum. An hour might be enough to change the language you use, but it’s not enough to give you a new way to relate to reality.
The probability that an asteroid destroys humanity in the next decade is relatively small. On the other hand it’s still useful for our society to invest more resources into telescopes to have all near-earth objects covered. The same goes for Yellowstone destroying our civilisation.
Our society is quite poor at dealing with low probability high impact events. If it comes to things like Yellowstone the instinctual response of some people is to say: “Extraordinary claims require extraordinary evidence.”
That kind of thinking is very dangerous given that human technology gets more and more powerful as time goes on.
I would say the probabilities of a Yellowstone eruption or a meteor impact are both vastly higher than something like the existence of a specific deity. They’re in the realm of possibilities that are worth thinking about. But there are tons of other possible civilization-ending disasters that we don’t, and shouldn’t, consider, because they have much less evidence for them and thus are so improbable that they are not worth considering. I do not believe we as humans can function without discounting very small probabilities.
But yeah, I’m generally rather optimistic about things. Reading LW has helped me, at that—before, I did not know why various things seemed to be so wrong, now I have an idea, and I know there are people out there who also recognize these things and can work to fix them.
As for the note about changing their central beliefs, I agree on that. What I meant to say was that the central beliefs of this hypothetical skeptic are not actually different from yours in this particular regard, he just uses different terminology. That is, his thinking goes ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’ → ‘This is not true’ and what happens in his brain is he figures it’s untrue and does not consider it any further. I would assume that your thinking goes something along the lines of ‘This has little evidence for it and is a very strong claim that contradicts a lot of the evidence we have’ → ‘This is very unlikely to be true’, and then you skip that last step, but what still happens in your brain is that you figure it is probably untrue and don’t consider it any further.
And both of you are most likely willing to reconsider should additional evidence present itself.
Careful there. Our intuition of what’s in the “realm of possibilities that are worth thinking about” doesn’t correspond to any particular probability, rather it is based on whether the thing is possible based on our current model of the world and doesn’t take into account how likely that model is to be wrong.
If I understand you correctly, then I agree. However, to me it seems clear that human beings discount probabilities that seem to them to be very small, and it also seems to me that we must do that, because calculating them out and having them weigh our actions by tiny amounts is impossible.
The question of where we should try to set the cut-off point is a more difficult one. It is usually too high, I think. But if, after actual consideration, it seems that something is actually extremely unlikely (as in, somewhere along the lines of 10^{-18} or whatever), then we treat it as if it is outright false, regardless of whether we say it is false or say that it is simply very unlikely.
And to me, this does not seem to be a problem so long as, when new evidence comes up, we still update, and then start considering the possibilities that now seem sufficiently probable.
Of course, there is a danger in that it is difficult for a successive series of small new pieces of evidence pointing towards a certain, previously very unlikely conclusion to overcome our resistance to considering very unlikely conclusions. This is precisely because I don’t believe we can actually use numbers to update all the possibilities, which are basically infinite in number. It is hard for me to imagine a slow, successive series of tiny nuggets of evidence that would slowly convince me that Zeus actually exists. I could read several thousand different myths about Zeus, and it still wouldn’t convince me. Something giving a single major push to the probability, large enough to force me to consider it more thoroughly and privilege that hypothesis in the hypothesis-space, seems to be the much more likely way: say, Zeus speaking to me and showing off some of his powers. This is admittedly a weakness, but at least it is an admitted weakness, and while I haven’t found a way to circumvent it yet, I can at least try to mitigate it by consciously paying more attention than I intuitively would to small but not infinitesimal probabilities.
Anyway, back to the earlier point: What I’m saying is that whether you say “X is untrue” or “X is extremely unlikely”, when considering the evidence you have for and against X, it is very possible that what happens in your brain when thinking about X is the same thing. The hypothetical skeptic who does not know to use the terminology of probabilities and likelihoods will simply call things he finds extremely unlikely ‘untrue’. And then, when a person who is unused to this sort of terminology hears the words ‘X is very unlikely’ he considers that to mean ‘X is not unlikely enough to be considered untrue, but it is still quite unlikely, which means X is quite possible, even if it is not the likeliest of possibilities’. And here a misunderstanding happens, because I meant to say that X is so unlikely that it is not worth considering, but he takes it as me saying X is unlikely, but not unlikely enough not to be worth considering.
Of course, there are also people who actually believe in something being true or untrue, meaning their probability estimate could not possibly be altered by any evidence. But in the case of most beliefs, and most people, I think that when they say ‘true’ or ‘false’, they mean ‘extremely likely’ or ‘extremely unlikely’.
Disagree. Most people use “unlikely” for something that fits their model but is unlikely, e.g., winning the lottery, having black come up ten times in a row in a game of roulette, two bullets colliding in mid air. “Untrue” is used for something that one’s model says is impossible, e.g, Zeus or ghosts existing.
I am confused now. Did you properly read my post? What you say here is ‘I disagree, what you said is correct.’
To try and restate myself, most people use ‘unlikely’ like you said, but some, many of whom frequent this site, use it for ‘so unlikely it is as good as impossible’, and this difference can cause communication issues.
My point is that in common usage (in other words, from the inside) the distinction between “unlikely” and “impossible” doesn’t correspond to any probability. In fact there are “unlikely” events that have a lower probability than some “impossible” events.
Assuming you mean that things you believe are merely ‘unlikely’ can actually, more objectively, be less likely than things you believe are outright ‘impossible’, then I agree.
What I mean is that the conjunction of possible events will be perceived as unlikely, even if enough events are conjoined together to put the probability below what the threshold for “impossible” should be.
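The conjunction point can be made concrete with a toy calculation (the coin-flip framing and threshold are illustrative assumptions, not from the thread):

```python
# Sketch: a conjunction of individually plausible events can be less probable
# than a single event most people would intuitively label "impossible".
p_single = 0.5            # each event alone feels perfectly possible
n = 60                    # conjoin 60 independent such events
p_conjunction = p_single ** n

print(p_conjunction)      # ~8.7e-19, below the 1e-18 figure discussed in this thread
```

Yet most people would call the 60-event conjunction merely “unlikely”, while calling Zeus’s existence “impossible”.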
True. However, there is no such thing as ‘impossible’, or probability 0. And while in common language people do use ‘impossible’ for what is merely ‘very improbable’, there’s no accepted, specific threshold there. Your earlier point about people seeing a fake distinction between things that seem possible but unlikely in their model and things that seem impossible in their model contributes to that. I prefer to use ‘very improbable’ for things that are very improbable, and ‘unlikely’ for things that are merely unlikely, but it is important to keep in mind that most people do not use the same words I do and to communicate accurately I need to remember that.
Okay, I just typed that and then I went back and looked and it seems that we’ve talked a circle, which is a good indication that there is no disagreement in this conversation. I think that I’ll leave it here, unless you believe otherwise.
That is empirically false.
Maybe you meant “To be a proper Bayesian, one should have probabilities for one’s beliefs”?
To the extent that people don’t think in terms of probabilities, they aren’t Bayesians. I think that’s part of the definition of Bayesianism.
There are practical issues with people not living up to that ideal, but that’s another topic.
I second theruf’s “what”. The card reads like anti-deathism, not deathism. (Also, what the heck does “not firewalled from rationalism” mean?)
Sorry, I was being jargony. See Firewalling the Optimal from the Rational.
Quite right. Whoops. I’ll go fix that now...
Ah.
This is why I’m a big fan of the Yudkowskian practice of turning all instances of jargon in the text (or at least, the first appearance of any jargony term in a post) into a link to the relevant post/etc.
Er … what?
“The most amazing thing about philosophy is that even though no nobody knows to do it, and even though it has never achieved anything, it is still possible to do it really badly”
--Oolon Kaloophid
Is there missing context, or did a cat philosopher walk across your keyboard? Or is it meant to evoke “writing but really badly”?
Also: strongly disagree that “it has never achieved anything”. See also, “successful philosophy stops being philosophy and becomes another science” (not an exact quote).
“Many who are self-taught far excel the doctors, masters and bachelors of the most renowned universities” ― Ludwig von Mises
Oft discussed here, and shown to be empirically wrong in math and physics (if you define “excel” as “make notable discoveries”). Probably also wrong in comp. sci., chem and to a lesser degree in engineering. It might still be true in some nascent areas where one does not need 10 years of intense studying to get to the leading edge.
There is one good example of an unschooled mathematician: Ramanujan. The lack of need for special equipment in maths probably has something to do with it.
Yes, he is definitely an exception. Unfortunately, I cannot think of anyone else in the last 100 years. Possibly because these days anyone brilliant like that ends up in the system. Which is a good thing, if true.
That sounds like a list of non-diseased disciplines. Is this by chance? Alternatively, it’s the STEM subjects. Same thing?
On the other hand, if “excel” is “do well in life” then, I don’t know. But that is the reading that the original context of the quote suggests to me:
Also an interesting view of education. One of the ancients said that the mind is not a pot to be filled but a fire to be ignited(1), and nobler teachers see the aim of their profession as the igniting of that fire in their students. However, Mises appears to take the view that this is impossible (he does not limit his criticism of education to any time and place), that teaching cannot be anything but the filling of a pot, and the igniting of the fire can come only from the inner qualities of the individual, incapable of being influenced from outside.
(1) As usually quoted. I’ve just added the original source of this to the quotes thread.
One of the more popular ideals of education is summarized in this quote from Malcolm Forbes:
Hmm, probably deserves a top-level comment. Anyway, the reality is that some people are happy with imitations, while others strive for creativity:
So good education is beneficial to creative types, as well, since to defy something or to add to something, you have to learn that something first.
A bit harsh, given that many people are at least a little bit creative.
Not sure if this is Mises’ opinion or what he argues against, but, again, seems a bit harsh. There are always the outliers, but for the majority of people this “igniting” is a combination of nature and nurture.
Some numbers would be useful there.
Numbers would be kind of a nit-pick I would think. The point of the statement is not the word “many”, but rather the rest of the statement. It’s sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning.
Many as an absolute number, or many as a fraction of all self-taught people? I’d agree with the former but not with the latter. IME most self-taught people end up with gross misconceptions because of this.
Absolute number. The point of the statement is not the word “many”, but rather the rest of the statement. It’s sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning. But yeah, the reference to the double illusion is spot on and is definitely a kink that has to be ironed out with effort and testing.
Correlation/causation? Selection effects?
Neither. Obviously, the average excellence of “doctors, masters and bachelors” of the most renowned universities is higher than the average excellence of people who are self-taught. Nobody suggests that being self-taught correlates positively with excellence.
The quotation is still undoubtedly true, because there are many more individuals who are self-taught than individuals who have these credentials. It is also plausible that the variance in excellence among the self-taught is much higher. Therefore, it is trivial to identify self-taught individuals who are more knowledgeable than most highly credentialed university graduates.
In fact, as a doctoral student in applied causal inference at a fairly renowned university, I can identify several self-taught Less Wrong community members who understand causality theory better than I do.
Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative, effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it’s still good to get a college education in STEM. I believe that STEM majors will benefit more from the useful things they learn, more than they will be hurt or held back by the evil, self-contradictory, things they “learn” (are indoctrinated with).
I’m strongly in agreement with libertarian investment researcher Doug Casey’s comments on education. I also agree that the average indoctrinated idiot or “pseudo-intellectual” is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag down entire networks that are populated by conformists to “lowest-common-denominator” pseudo-philosophical thinking. This constitutes uncritically accepted and regurgitated memes reproduced by political sophistry.
Of course, I think that people who totally “self-start” have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, this argument is that “It’s a young man’s world.” This will get truer and truer, until the point where the initial learning curve once again becomes a barrier to achievement beyond what well-educated “ultra-intelligences” know, and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally, there will be a “core skill set” that all human+ sentiences have, at that time, but there will still be specialization for project-oriented work, due to specifics of a complex situation.
For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it’s like to be on the receiving end of systemic oppression, such as being a Jew in Hitler’s Germany, or a Native American at Wounded Knee), or there becomes a strong self-regulating marketplace, post-enlightenment civilization that contains many “enlightened” “ultraintelligent machines” that all decentralize power from one another and their sub-systems.
I’m interested to find out if those machines will have memorized “Human Action” or whether they will simply directly appeal to massive data sets, gleaned directly from nature. (Or, more likely, both.)
One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing, a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of “jumping through regulatory hoops” that has little or nothing to do with the pure, free-market “instantly evolving marketplaces at computation-driven innovation speeds” practice of medicine.
To form a full pattern of the incentives that govern U.S. college education, and social expectations that cause people to choose various majors, and to determine the skill levels associated with those majors, is a very complex thing. The pattern recognition skills inherent in the average human intelligence probably prohibit a very useful emergent pattern from being generated. The pattern would likely be some small sub-aspect of college education, and even then, human brains wouldn’t do a very good job of seeing the dominant aspects of the pattern, and analyzing them intelligently.
I’ll leave that to I.J. Good’s “ultraintelligent machines.” Also, I’ve always been far more of a fan of Hayek, but haven’t read everything that both of them have written, so I am reserving final hierarchical placement judgment until then.
Bryan Caplan, Norbert Wiener, Kevin Warwick, Kevin Kelly, Peter Voss in his latest video interview, and Ray Kurzweil have important ideas that enhance the ideas of Hayek, but Hayek and Mises got things mostly right.
Great to see the quote here. Certainly, coercively-funded institutions whose bars of acceptance are very low are dominant now, and their days are numbered by the rise of cheaper, better alternatives. However, if the bar is raised on what constitutes “renowned universities,” Mises’ statement becomes less true, but only for STEM courses, of which doctors and other licensed professionals are often not participants. Learning how to game a licensing system doesn’t mean you have the best skills the market will support, and it means you’re of low enough intelligence to be willing to participate in the suppression of your competition.
You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn’t really matter (hence the existence of both Yale and MIT), as long as they’re sufficiently difficult to ensure that few pass.
I’m writing from an elite engineering university, and as far as I can tell, this is more-or-less our tacitly admitted pedagogical method: some students will survive the teaching process, and they will retroactively be declared superior. The question of whether we even should optimize our pedagogy to maximize the conveyance of information from professor to student plays no part whatsoever in our curriculum.
If you’re right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful “hazing” system you describe. I think it’s likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.
Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
Kurtz’ English girlfriend, in Heart of Darkness by Joseph Conrad, failing to notice confusion
I don’t get it.
In the story there’s nothing to understand, except that Kurtz wanted to have sex with her. The speaker’s family wouldn’t approve unless he gained tribal status (by going to Africa and killing people for ivory).
Eadem Mutata Resurgo
[the] Same, [but] Changed, I [shall] Rise
On the tombstone of Jacob Bernoulli.
Some context may be useful. (Sadly, the people who made the tombstone screwed up[1] and put the wrong sort of spiral on it.)
[1] I suppose this is a rather clever pun, but only by coincidence.
Voted up for the pun! I liked it for the cryonics reference. Like in Lovecraft.
Cracked
I don’t see how that’s any different from all the other age groups ;-).
We are out of it, so we can bitch about it ;-).
Being able to patronise the young is the only advantage of age
Failing health is the only disadvantage of age. In every other way, the years just make things better.
Other people and governments knowing about it, and changing how rules and expectations apply, are pretty darn big disadvantages for young, old, and in-between alike, in different situations and ways.
This is too abstract for me to have any idea what you’re talking about.
Finding out that you’re stupid (or ignorant) is an important start. I don’t recommend insulting people because they’ve started rather than continued the job, especially if they’re young.
Rudolf Carnap
What?
Well, musicians are metaphysicians with no meta-ability.
Ontology is part of metaphysics. My favorite ontologist is at the moment a bioinformatician named Barry Smith. I think the OBO Foundry is an important project.
Not thinking deep enough about metaphysics might be a core problem of why the DSM-V is as bad as it is.
― Stanley Milgram
― Stanley Milgram
― Stanley Milgram, Obedience to Authority
― Stanley Milgram, Obedience to Authority
― Stanley Milgram, Obedience to Authority
― Stanley Milgram, Obedience to Authority
― Stanley Milgram, Obedience to Authority
― Robert A. Heinlein
Please post separately, as bramflakes said. Also, no more than 5 quotes per poster per monthly thread (this is in the OP).