Open thread, 25-31 August 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
A few months ago I started using the Ultimate Geography Anki deck after performing quite abysmally on some silly geography quiz that was doing the rounds on Facebook. I now know where all the damn countries are, like an informed citizen of the world. This has proven itself very useful in a variety of ways, not least of which is in reading other material with a geographical backdrop. For example, the chapter in Guns, Germs and Steel on Africa is much more readable if you know where all the African countries are in relation to one another.
(In the process of doing this, coupled with an international event in Sweden, I’ve learned that the Scandinavian education systems are much, much better than that of the UK at teaching children about the rest of the world)
The geography deck was particularly easy to slip into because it developed an area I already (weakly) knew about. I’m looking for some new Anki content of a similar nature: a cross-domain-application body of knowledge I probably sort-of know a little bit already, that I can comprehensively improve upon.
Suggestions and anecdotes of similar experiences welcome.
Yep, I find the world a much less confusing place since I learned capitals and locations on the map. I had (and to some extent still do have) a mental block on geography, which was ameliorated by it.
Rundown of positive and negative results:
In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it’s the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with you, but I learned to love it.
I suspect that learning the dates of monarchs and Prime Ministers (e.g. of England/UK) would have a similar benefit in contextualising and de-intimidating historical facts, but I never finished those decks and haven’t touched them in a while, so never reached the critical mass of knowledge that allowed me to have a good handle on periods of British history. I found it pretty difficult to (for example) keep track of six different Georges and map each to dates, so slow progress put me off. Let me know if you’re interested and want to set up a pact, e.g. ‘We’ll both do at least ten cards from each deck a day and report back to the other regularly’ or something. In fact that offer probably stands for any readers.
I installed some decks for learning definitions in areas of math that I didn’t know, but found memorising decontextualised definitions hard enough that I wasn’t motivated to do it, given everything else I was doing and Anki-ing at the time. I still think repeat exposure to definitions might be a useful developmental strategy for math that nobody seems to be using deliberately and systematically, but I’m not sure Anki is the right way to do it. Or, if it is, that shooting so far ahead of my current knowledge was the best way to go about it. The same went for a LaTeX deck I got, having pretty much never used LaTeX and not practising it while learning the deck.
Canadian provinces/territories I have not yet found useful beyond feeling good for ticking off learning the deck, which was enough for me since I did them in a session or two.
Languages Spoken in Each Country of the World (I was trying to do not just country-->languages but country-->languages with proportions of population speaking the languages) was so difficult and unrewarding in the short term that I lost motivation extremely quickly (this was months ago). The mental association between ‘Berber’ and ‘North Africa’ has come up a surprising number of times, though. Most recently yesterday night.
Periodic table (symbol<-->name, name<-->number) took lots of time and hasn’t been very useful for me personally (I pretty much just learned it in preparation for a quiz). Learning just which elements are in which groups/sections of the Periodic table might be more useful and a lot quicker (since by far the main difficulty was name<-->number).
I am relatively often wanting for demographic and economic data, e.g. populations of countries, populations of major world cities, populations of UK places, GDPs. Ideally I’d not just do this for major places, since I want to get a good intuitive sense of these figures all the way from very large or major places down to tiny places.
Similarly if one has a hobby horse it could be useful. Examples off the top of my head (not necessarily my hobby horse): Memorising the results from the LessWrong surveys. Memorising the results from the PhilPapers survey. Memorising data about resource costs of meat production vs. other food production. Memorising failed AGI timeline predictions. Etc.
I found starting to learn Booker Prize winners on Memrise has given me a few ‘Ah, I recognise that name and literature seems less opaque to me, yay!’ moments, but there are probably higher-priority decks for you to learn unless that’s more your area.
What about learning a sense of scale, for both time and space?
planets and stars
replies to most common comments to the previous video
sub-atomic to hypothetical multi-universes—uses pictures and numbers, no zooming. I hadn’t realized how much overlap there is in size between the larger moons and smaller planets, and (in spite of having seen many pictures) hadn’t registered that nebulas are much bigger than stars.
I’m going to post this before I spend a while noodling around science videos, but it might also be good to work on time scales and getting oriented among geological and historical time periods, including what things were happening at the same time in different parts of the world.
I’m a 4th-year economics undergrad preparing to start applying to PhD programs, and while I’ve never formally attempted to memorize GDPs, I’ve found that having a rough idea of where a country’s per capita GDP sits is very useful in understanding world news and events (for example, I’ve noticed that the $8,000-12,000 per year range seems to be the point where the median household gets an internet connection). If you do attempt to go the memorization route, be sure to use PPP-adjusted figures, as non-adjusted numbers will tend to systematically underestimate incomes in developing countries.
I did British monarchs last year while on a history kick (which I’m still on). Pro-tip: watch films, television shows and plays featuring said monarchs, as they include salient contemporary historical events. For example, Nigel Hawthorne was the mad George. Hugh Laurie was his son, the Prince Regent, a contemporary of the Duke of Wellington (Stephen Fry), which places him temporally alongside the Napoleonic wars. Colin Firth was Queen Elizabeth II’s stuttering dad in The King’s Speech. His brother was Mike from Neighbours (or the bad guy from Iron Man 3 if you’re under 30) and their dad was Dumbledore.
(It turns out that royal history has plenty of independently interesting features, because it contains a lot of murders and wars and speculation about parentage. Contemporary introductions to historiography emphasise the movement away from history as the deeds of powerful men exercising their will through war and conquest, but the kings and wars are a lot more memorable and easier to place in time than the ephemeral stuff like trade routes and adoption of crops.)
Great effort. If I may suggest a topic without providing a deck, I’d say learn the vocabulary of personal finance. Or, more generally, learn the vocabulary of fields relevant to most lives from time to time, like medicine, law and, well, finance. This helps you search for the right things when needed and helps in communicating with the relevant professionals.
Attempting to learn medicine vocabulary without understanding the underlying knowledge base is quite hard. Taking a readymade deck of corresponding vocabulary usually leads to attempting to learn something without understanding. That leads to forgotten cards and is ineffective.
Since I don’t have the books around me, I’ll have to write from memory without a specific source. It might have been Decisive, but I’m not sure.
In the book the authors described the problem of physicians leading patients with their questions: given the complaint “my stomach hurts”, they ask “does it hurt here?”, pointing to where they feel the pain should be located. The patient, intimidated by the professional in front of them, affirms the question without supplying the information that it also hurts elsewhere. This is not yet a problem of vocabulary, but it is nonetheless important to keep in mind. The more interesting example was of an old man who described his problem as “feeling dizzy”, so physicians tried to treat him for the syndromes that typically occur at higher ages. After some time a physician actually asked him “when do you feel dizzy?”, receiving the answer “I feel dizzy all the time, when I get up, when I stand in the kitchen, when I read my newspaper.” It turns out what this patient described as “being dizzy” was more along the lines of “feeling confused” and was a symptom of an as-yet-undiagnosed depression over his late wife’s passing.
The whole episode could have been avoided if the patient had known how to correctly describe his issue, and if the physicians had been more conscientious in diagnosing a specific issue.
Learning that there is a qualia of “being dizzy” and a qualia of “feeling confused”, and learning to distinguish those two qualia, isn’t easy if you don’t have those qualia in the first place.
Words are cheap once you have the qualia.
Most people have fairly little awareness of what goes on inside their own body. Furthermore these days most doctors also lack the ability to perceive that information kinesthetically but focus on various tests and verbal feedback.
Practically, it’s also important to think about what information a doctor actually needs. That means you have to know what’s normal and how you deviate from that.
How about starting with the Greek and Latin roots for medical terms?
Why not then just get a working knowledge of those languages? I had a classical education, so I took Latin and Greek. I also speak some German, as well as a working but halting conversational knowledge of French. When you have that, you can understand the Latin root of a word like ambulate or return.
A working knowledge of the languages is a much larger project. I’m moderately good at figuring out medical terms, but that doesn’t mean I can do more than guess at translations of text.
The actual medical knowledge is still required, but if you know the roots of things you can learn a great deal. The roots will give you the meaning; the specifics can be quickly researched. In this day and age, the ability to know a great deal about something requires only simple searches. I make a list of things to search all the time; regular reading and research is Aristotelian.
Could you unpack that last sentence? I can’t make sense of it. Aristotelian?
My procedure for coming up with search terms is to vaguely imagine a boring piece of writing about the subject I’m interested in, and then look for the least common words from it. If that doesn’t work, then loosely mull for words inspired by the results that came up in the unsatisfactory searches.
Aristotelian = of Aristotle. Aristotle believed that regular reading, research, and expansion of the mind on various subjects was necessary to a good life. He wrote a great deal about the good life. I suggest adding that to your reading list.
I just take a subject, say “Scoliosis”, and put that into Google and see what comes up. I start with the most popular sites and then look to more personal accounts once I know what is or is not scientific about it. For example, I am working on a novel right now and I needed to know how people performed check fraud. So I put that into Google and started to read, and eventually found a book by a detective about different cases he had solved. That helped me create a scenario for the book that was very good and true to life. If you so choose, you can do that regularly on a variety of subjects to learn more about something. The luxury of having a background in classical languages is that you can decode language and derive some meaning from it. Research is about layering. You start at the surface and then go deeper, then deeper, and then deeper still. Think about the hierarchy of media:
Social Media (instant)
Newspapers (or daily, up-to-the-minute news)
Magazines (or media taking 4-5 days to create)
Content aggregators/Monthly Publications
Books
So for example researching check fraud I might see:
Tweets/posts about it
A newspaper article about a check fraud ring
A magazine piece about its prevalence in America
A group of these items over a period of time between one month and one year
A book about check fraud rings by an expert
How far you go in the hierarchy depends on how much you want to know or where that information might be located. Also, for more effective searches in the future, you may wish to use full sentences (Google is getting good at that) or to learn Boolean search terms.
KnaveOfAllTrades’s idea of learning demographic & economic (GDP and its component parts) statistics of various places has occurred to me as a candidate for a useful Anki deck, so I second that.
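A minimal sketch of what generating such a deck could look like, assuming Anki’s plain-text import and using obviously placeholder figures rather than real statistics:

```python
# Build a tab-separated file that Anki can import as front/back cards.
# The figures below are placeholders for illustration, not real data.
facts = {
    "Population of Country A": "10 million",
    "Population of City B": "2 million",
    "GDP (PPP) of Country C": "$500 billion",
}

with open("demographics_deck.txt", "w", encoding="utf-8") as f:
    for front, back in facts.items():
        f.write(f"{front}\t{back}\n")  # one card per line: question <TAB> answer
```

Importing the resulting file via File > Import in Anki, with the first field mapped to the front and the second to the back, gives a working deck you can then flesh out with real figures.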
Knowing some mathematical constants to a few significant figures can be useful. Memorizing √10 ≈ 3.16 lets you interpret midpoints between ticks on a logarithmic scale, and √2 & √3 are the lengths of diagonals of unit squares & cubes. And knowing all three roots makes it easier to guesstimate square roots in general, using the √(ab) = (√a)(√b) result for non-negative a & b. Likewise for e.g. exp(2), exp(3), ln 2 & ln 3. The 68-95-99.7 rule should go on the list as well.
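A tiny sketch of the guesstimation trick, using nothing beyond the standard library and the usual few-figure approximations of the memorised roots:

```python
# Estimate sqrt(50) by factoring out a perfect square: sqrt(50) = sqrt(25 * 2) = 5 * sqrt(2).
import math

MEMORISED = {2: 1.41, 3: 1.73, 10: 3.16}  # roots remembered to a few significant figures

estimate = 5 * MEMORISED[2]   # 5 * 1.41 = 7.05
exact = math.sqrt(50)         # 7.0711...
print(f"estimate {estimate:.2f} vs exact {exact:.2f}")
```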
So I’m apparently a fictional spaceship now [1 2]. Also someone who’s been instructed to keep an eye on it.
If anyone hasn’t read Blindsight yet, you really should.
Godspeed!
Is there an existing post on people’s tendency to be confused by explanations that don’t include a smaller version of what’s being explained?
For example, confusion over the fact that “nothing touches” in quantum mechanics seems common. Instead of being satisfied by the fact that the low-level phenomena (repulsive forces and the Pauli exclusion principle) didn’t assume the high-level phenomena (intersecting surfaces), people seem to want the low-level phenomena to be an aggregate version of the high-level phenomena. Explaining something without using it is one of the best properties an explanation can have, but people are somehow unsatisfied by such explanations.
Other examples of “but explain(X) doesn’t include X!”: emotions from biology, particles from waves, computers from solid state physics, life from chemistry.
More controversial examples: free will, identity, [insert basically any other introspective mental concept here].
Examples of the opposite: any axiom/assumption of a theory, billiard balls in Newtonian mechanics, light propagating through the ether, explaining a bar magnet as an aggregation of atom-sized magnets, fluid mechanics using continuous fields instead of particles, love from “God wanted us to have love”.
Most people want the explanations (models) to make intuitive sense, though a few are satisfied with the underlying math only. And intuition is based on what we already know and feel.
The Pauli exclusion (or inclusion, if you take bosons) principle feels to me like rubbery wave-functions pushing against each other (or sticking together), even though I understand that antisymmetrization is not actually a microscopic force, and interacting electrons are not actually separate entities.
I do not think that one should lump free will and identity in the same category as basic QM, however, as we do not have nearly the degree of understanding of the cognitive processes in System 1 which produce the feeling of either.
I feel like that’s extremely misleading. The Pauli exclusion principle is not a force, and cannot cause a particle to accelerate.
So… you repeated what I wrote (“antisymmetrization is not actually a microscopic force”) and then call what I said “extremely misleading”? Have a downvote.
What is the goal of having an explanation?
Do you want the explanation to change your model of the world in a way that allows you to have the right intuition about a subject matter? Do you want the explanation to allow you to make better predictions about the subject matter?
Beliefs are supposed to pay rent.
If someone without a physics background hears about quantum mechanics, they are supposed to be confused. If they aren’t, they are simply projecting their old ideas onto the new theory and not really updating anything on a deeper level.
I’m not aware of a published theory of emotions as an extension of biology that describes all aspects of emotions that I observe on a day-to-day basis.
Understanding hardware does need solid state physics but you also need to understand software to understand computers.
The one thing missing from that video (at least up to 4:23, when I got frustrated—and he had explicitly disclaimed talking about the Pauli Exclusion Principle before this point), which really gets to the heart of it, is that the Pauli Exclusion Principle kicks in when one thing literally runs into the other—when parts of two things are trying to occupy exactly the same state. If ‘couldn’t go any further or you’d be inside the other thing, but you can’t do that’ isn’t ‘contact’ then the word has no meaning.
The interviewer is exactly right at 4:17 - he did the demonstration wrong. He should have brought them into contact. Only when he was pushing inwards and the balls were pushing back hard enough to balance—that’s when he’d say they’re in contact.
So this isn’t a great example because the proper explanation does include a smaller version of what’s being explained.
What people complaining about this usually do is link to this video (or better), but I’m not sure it’s actually helpful for people who don’t get it.
I have uploaded a collection of My Little Pony one-shots called Flashes of Insight to both FIMFiction.net and FanFiction.net. While most of the stories have no particular relevance to LessWrong, “Good Night” draws heavily on ideas I first encountered on this site, and I expect most people here will find it enjoyable. Eliezer Yudkowsky called it “chilling,” which, coming from him, I consider a very great compliment.
The stories are great!
These make me sad, but not in an objectionable way. Liked and Follow’d. Good Night seems specifically optimised to chill EY, was it your goal?
I am a bit puzzled by one aspect of Good Night, but that may be because I don’t understand the tech level that the characters are operating at. In Twilight’s place, it seems that the obvious thing to do would be to znxr n pbcl bs urefrys jvgu gur nccebcevngr oberqbz-erqhpgvba arhebzbqvsvpngvba, naq yrnir vg gb xrrc Pryrfgvn pbzcnal. Vs guvf vf cbffvoyr va gur frggvat, V qba’g frr jul guvf vfa’g n pyrne jva; fvapr Gjvyvtug rkcyvpvgyl qbrf abg jnag gung shgher sbe urefrys, fur gurerol fubhyq abg vqragvsl jvgu n ure-jub-qbrf-jnag-gung-shgher, be srne fhowrpgvir pbagvahngvba nf gung pbcl. Lbhe Gjvyvtug vf bs pbhefr serr gb abg-jnag guvf fbyhgvba, ohg vg ohtf zr gung fur qvqa’g guvax bs vg gb erwrpg vg.
Oh, good heavens no! The thought that Mr. Yudkowsky would ever read the story did not even occur to me until long after it was finished.
At the level of magitek I envisioned the characters having, your solution should definitely be possible. The realistic answer is that the prompt gave us twelve hours of prep time and one hour of writing time; I did not think of your idea during the allotted time, and if I had I would have mercilessly cut it at the planning stage so that I could fit the whole story into one hour. Even disregarding the time limit, rnpu nethzrag V unq Pryrfgvn naq Gjvyvtug qvfphff jnf n fvatyr, ovt, eryngviryl fvzcyr pbaprcg; vzzbegnyf zhfg zbqvsl fb gung gurl pna rgreanyyl ybbc, be gurl zhfg tebj, be gurl zhfg qvr. Your idea is more complex, and it doesn’t fit the theme. If you had handed me a beautifully written section which covered the whole issue in three paragraphs while I was writing, I would have had no choice but to murder it for the sake of the story as a whole.
Literary concerns aside, my Twilight would disagree with the notion that lbh pna pubbfr juvpu vafgnaprf bs lbh lbh fhowrpgviryl rkcrevrapr onfrq ba jurgure lbh vqragvsl jvgu gurz be abg.
No such accusation intended! In all honesty, my thought process was “Guvf fgbel erpncvghyngrf gur svany gevyrzzn (nf lbh fnl, ybbc/tebj/qvr) bs Pnryrz rfg Pbagreeraf, juvpu vf nyernql xabja gb cbffrff RL-puvyyvat cebcregvrf; lbh pbaqrafr vg irel rssrpgviryl, naq gura lbh unir Pryrfgvn rpub bar bs gur zber ubcrshy Sha Gurbel cbfgf jvgu ‘Vg znl jryy or gung n zber pbagebyyrq pyvzo hc gur vagryyvtrapr gerr vf cbffvoyr’; naq gura Gjvyvtug erwrpgf vg.” I just read it as very pointed, which clearly was not the intended reading.
I can’t dispute your claim about story structure; it worked!
I think you got my motivations backwards, though—I agree with your Twilight on that cite! V qba’g guvax qrpynevat “V bayl vqragvsl jvgu shgher ybggrel-jvaavat!zr” yrgf zr rkcrpg gb jnxr hc n ybggrel-jvaare. V qb, ubjrire, trarenyyl rkcrpg gb jnxr hc jvgu inyhrf ynjshyyl qrevirq sebz gubfr bs abj!zr, naq abg jvgu bccbfrq inyhrf. Guvf qbrfa’g srry yvxr n znggre bs pubvpr be gnfgr. Vs V jnagrq gb qvr (V qba’g), V jbhyqa’g bcg gb znxr na vqragvpny pbcl bs zlfrys va gur cebprff—ohg V jbhyqa’g arprffnevyl bowrpg gb znxvat n pbcl bs zlfrys zbqvsvrq gb jnag gb yvir. Vaghvgviryl, vg frrzf gung gur svefg pnfr gujnegf zl qrfverq qrngu va n jnl gung gur frpbaq pnfr qbrf abg.
Good Night was really great.
Enjoyed that, thanks. Have you read Diaspora by Egan?
I have not. All I know of it is what Eliezer quoted in CFAI.
The cookie example here is a nice explanation of the difference between frequentists and Bayesians.
So, I made two posts sharing potentially useful heuristics from Bayesianism. So what?
Should I move one of them to Main? On the one hand, these posts “discuss core Less Wrong topics”. On the other, I’m honestly not sure that this stuff is awesome enough. But I feel like I should do something, so these things aren’t lost (I tried to do a talk about “which useful principles can be reframed in Bayesian terms” at a Moscow meetup once, and learned that those things weren’t very easy to find using the site-wide search).
Maybe we need a wiki page with a list of relevant lessons from probability theory, which can be kept up-to-date?
(decided to move everything to Main)
I might need some recalibration, but I’m not sure.
I research topics of interest in the media, and I feel frustrated, angry and annoyed about the half-truths and misleading statements that I encounter frequently. The problem is not the feelings, but whether I am ‘wrong’. I figure there are two ways that I might be wrong:
(i) Maybe I’m wrong about these half-truths and misleading statements not being necessary. Maybe authors have already considered telling the facts straight and that didn’t get the best message out.
(ii) Maybe I’m actually wrong about whether these are half-truths or really all that misleading. Maybe I am focused on questions of fact and the meanings of particular phrases that are overly subtle.
The reason why I think I might need re-calibration is because I don’t consider it likely that I am much less pragmatic, smarter or more accurate than all these writers I am critical of (some of them, inevitably, but not all of them—also these issues are not that difficult intellectually).
Here are some concrete examples, all regarding my latest interest in the Ebola outbreak:
Harvard poll: Most recently, the HSPH-SSRS poll with headlines like “Poll finds US lack knowledge about ebola” or “Many Americans harbor unfounded fears about Ebola”. But when you look at the poll questions, they ask whether Americans are “concerned” about the risk, not what they believe the risk to be, and whether they think Ebola is spread ‘easily’. The poll didn’t appear to be about Americans’ knowledge of Ebola, but how they felt about the knowledge they had. The question about whether Ebola transmits easily especially irks me, since everyone knows (don’t they??) that whether something is ‘easy’ is subjective.
“Bush meat”: I’ve seen it said in many places that people need to stop consuming bush meat in outbreak areas (for example). I don’t know that much about how Ebola is spreading through this route, but wouldn’t it be the job of the media and epidemiologists to report on the rate of transmission from eating bats (I think there has only been one ground-zero patient in West Africa who potentially contracted Ebola from a bat) and weigh this against the role of local meat as an important food source (again, I don’t know; media to blame)? Just telling people to stop eating it would be ridiculous; hopefully it’s not so extreme. Also, what about cooking rather than drying local meat sources? This seems a very good example of the media being unable to nuance a message in a reasonable way, but I allow I could be wrong.
Media reports “Ebola Continues to Spread in Nigeria” when the additional Ebola cases were at that time all due to contact with the same person and were already in quarantine. This seemed to hype up the outbreak when in fact the Nigerians were successfully containing it. Perhaps this is an example of being too particular and over-analyzing something subtle?
Ever using the phrase ‘in the air’ to describe how Ebola does or doesn’t transmit, because this is a phrase that can mean completely different things to anyone using or hearing the phrase. Ebola is not airborne but can transmit within coughing distance.
The apparent internal inconsistency of saying that a case of Ebola might come to the US, but that an outbreak cannot happen here. Some relative risk numbers would be helpful here.
All of these examples upset me to various degrees since I feel like it is evidence that people—even writers and the scientists they are quoting—are unable to think critically and message coherently about issues. How should I update my view so that I am less surprised, less argumentative or less crazy-pedantic-fringe person?
My first suggestion would be to look at the incentives of people who write for the media. Their motivations are NOT to “get the best message out”. That’s not what they’re paid for. Nowadays their principal goal is to attract eyeballs and hopefully monetize them by shoving ads into your face. The critical thing to recognize is that their goals and criteria of what constitutes a successful piece do not match your goals and your criteria of what constitutes a successful piece.
The second suggestion would be to consider that writers write for a particular audience and, I think, most of the time you will not be a member of that particular audience. Mass media doesn’t write for people like you.
Your comment is well received. I’m continuing to think about it and what this means for finding reliable media sources.
My impression of journalists has always been that they would have to be fairly idealistic about information, and about communicating that information, to be attracted to their profession. I also imagine that their goals are constantly antagonized by the goals of their bosses, who do want to make money, and probably it is the case that the most successful sell out or find a trade-off that is not entirely ideal for them or the critical reader.
I’ll link this article by Michael Volkmann, a disillusioned journalist.
Unfortunately, in practice this frequently translates to “show the world how evil those blues are even if I have to bend the literal truth a little to do it.”
My feeling is that this quest is misguided. There is no such thing as a pure spring which gushes only truth—you cannot find one.
My own approach is to accept that reality is fuzzy, multilayered, multidimensional, looks very different from different angles, and is almost always folded, spindled, and mutilated for the purpose of producing a coherent and attractive story. Read lots of different (but, hopefully, smart and well-informed) sources which disagree with each other. Together they will weave a rich tapestry which might not coalesce into a simple picture but will be more “true”, in a way, than a straight narrative.
Having said this, I should point out that adding pretty clear lies to the mix is not useful and there are enough sources sufficiently tainted to just ignore.
The link is making a different argument—it says the problem isn’t with the journalists or with their bosses, it’s that the public isn’t paying attention to the stories journalists are risking their necks to get.
True. I linked the article as an example of the idealistic journalist, one that is disappointed that his motives are distrusted by the public.
That’s a funny sentence. You yourself blame scientists with whom you didn’t interact at all based on the way they got quoted without critically asking yourself whether your behavior makes sense.
If a journalist quotes a scientist, the process might be: the journalist picks up the phone and calls the scientist. They talk for 15 minutes about the issue. Then the journalist, who thinks it’s his job to quote an authority, picks one sentence from that interview that fits the narrative he wants to tell. It’s quite possible that the scientist didn’t even say that sentence word for word.
It’s also quite possible that you spend more time investigating the issue in detail than some of the journalists you read.
My limited experience with journalists supports this—when they speak with you, they often already have the outline of the story ready (the nearest existing cliché); they only need a few words they can take out of context and use to support their bottom line. You can try to educate them, but they don’t really listen to you to learn about the topic; they listen to catch some nice keywords.
I recently decided to bite the bullet and started to use the Markdown standard in my plain-text documents (I would have preferred the syntax of txt2tags or Org-mode, but neither of those is nearly as widespread and well-supported). It’s proven so useful that I am seriously considering uninstalling LibreOffice. Who needs a WYSIWYG editor when you have readable source code which can be easily converted to an html document? Not to mention that Notepad++ opens instantly, while LibreOffice Writer takes forever.
I highly recommend that anyone who deals with lots of text documents try Markdown. It will change your digital life. If you need help getting started, try the Markdown Tutorial.
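For anyone curious what “easily converted” can look like in practice, here is a minimal sketch using Python and the third-party markdown package (that package choice is my assumption; Pandoc or any other converter works just as well):

```python
# Convert a Markdown string to HTML. Requires: pip install markdown
import markdown

source = "# Notes\n\nSome *readable* plain text with a [link](http://example.com)."
html = markdown.markdown(source)
print(html)  # e.g. <h1>Notes</h1> followed by a <p> containing <em> and <a> tags
```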
Many people. Text editors and word processors are different tools serving different needs.
Try auto-generating a table of contents in your Markdown document or inserting a table with live formulas (you can, of course, use external tools to achieve any functionality...).
On a similar note, if the documents you are writing will end up printed, or in PDF format, I recommend TeX. It’s significantly more complex than Markdown, but also far more powerful. And absolutely irreplaceable if you are writing something Math/Formula heavy.
I also make a point of editing all my documents in a text editor, and then compiling them. Seconding Jaime’s recommendation.
For LessWrong meetup organizers: Do you bring in new long term members who are already the stereotypical STEM/intellectual/utilitarian/etc. type? Or do you attract a significant number of people who don’t meet that description but nonetheless do become long-term members?
Vast majority are CS/Math specifically IME.
This has implications for community-building. It means that LW groups, instead of trying to find members from a wide base, should be trying to find the people in the population who share the LessWronger mentality.
Agreed, GiveWell has mentioned this WRT EA community building. That there seems to be a type EA appeals to and reaching people who are predisposed to be sympathetic to EA memes is a better use of time than trying to convince people who don’t much care.
Has anyone tried doing EA outreach to Unitarians?
In short, one person tried once, and a few other people have thought about it.
Here are some notes on the issue:
http://everydayutilitarian.com/essays/conversation-with-michael-bitton-about-ea-marketing/
https://impact.hackpad.com/Projects-Update-Meeting-22-June-2014-cIPyB1TBIkJ
https://impact.hackpad.com/Projects-Update-Meeting-3-Aug-2014-H5Ha5mejJxb
The Fate of Galt’s Gulch Chile, an experimental Objectivist community. Post is by a buyer.
A story of Yet Another Real Estate Swindle..?
When your delusion runs deep enough, the actual process of joined-up thinking itself is literally your enemy.
That’s a very very uncharitable interpretation of that sentence in the body of the bill.
I really don’t think so. There’s a pattern of this with creationists, c.f. Paul Broun condemning embryology as (literally) the work of Satan—which sounds truly weird unless you know how much e.g. Dawkins hammered on embryology as slam-dunk proof of evolution in The Greatest Show On Earth. This is another in a long series of bills attempting to get creationism a foothold in publicly-funded education, even if it has to be written entirely in dogwhistles. It may seem uncharitable in the evidence given (a single link), but not if you know the history of this sort of attempted legislative end-run.
I don’t know (and don’t care much) whether that was a dogwhistle. The claim was that “the actual process of joined-up thinking itself is literally your enemy” and your link doesn’t come anywhere close to supporting it. You’re just looking for ammo in the culture wars not even caring whether it looks suitable or not.
I just started a tumblr (coffeespoonsposts) - which tumblrs should I follow?
Here is the newest version of the rationalist masterlist I know of, though it’s still a few months out of date. Also people who follow you (looks like we are following each other now, yay!). Also it can be fun to follow blogs for fandoms or things you think are cute, or whatever random things you are interested in.
I’ll try to update it before Sunday. Tumblr made spaghetti-code out of the html version of the list, making updating it more laborious than it should be. It’ll take some time to sort out, but I’ll solve the problem by saving a neat version on my laptop.
Sounds good. Guess I should request to be on it before then!
waves hello thanks for the masterlist and the follow :)
The one blog I can really recommend is scientiststhesis. Otherwise I’d suggest going over the Masterlist and see what strikes your interest. The one thing about tumblr is that good, interesting content is often on the same blogs as Doctor Who gifs and “OMG THAT’S SO CUTE!” The signal-to-noise ratio is rather low.
I think there was a masterpost of tumblr rationalists at some point; ask ozy about it, maybe? Besides that, it depends on your other interests.
Are the European meetups in English or the native language? I’m moving to Germany soon, and would love to attend some closer meetups (Germany/Netherlands/Belgium), iff they are English by default.
The policy in the Finnish meetups has been “Finnish if everyone is Finnish, English if there are foreigners present”. I would expect the meetups in the other countries to be similar.
In France it’s French if everybody speaks French, English otherwise (or sometimes a little bit of German, there are sometimes more German speakers than French speakers).
The Brussels meetup is typically in English.
And is also quite close to my new location. Thanks!
The Hamburg meetup is mostly in English (at least if it is asked for). And we would be happy to have new participants. Couchsurfing at my home is also an option.
Patrick McKenzie explains a bit more why he hates bitcoin:
> But Patrick, isn’t Bitcoin a great platform for remittances? No, it’s a terrible platform for remittances because 98% of the problem of remittances is what is called in networks the Last Quarter Mile Problem and Bitcoin has no infrastructure for solving it on either end of the remittance and, even if they did, would not find themselves cost-competitive with Western Union. (The part between the last quarter miles being close-to-free doesn’t help. Western Union can transfer money internally for close-to-free. The supermajority of their costs is maintaining an office which someone can go to in abuelita’s village. Seriously, check their annual report.)
The above does not apply if you want to acquire nootropics from a questionable source overseas.
Most of Patrick’s arguments against Bitcoin are actually against offering Bitcoin exclusively rather than against offering Bitcoin as an option, probably with an added fee.
I think “hate” is too strong a word. There are a lot of things for which I find no use but that I don’t hate.
I agree that “hate” isn’t doing much here besides giving emotional valence to empiricals, but this
sounds a little stronger to me than “finds no use for”.
Stanovich draws an interesting distinction between intelligence and rationality, where intelligence, as measured by IQ tests, is, so to say, the strength of an individual’s analytical abilities, whereas rationality is that individual’s tendency to use these analytical abilities (as opposed to fast and unreliable System 1 processes); i.e., his or her tendency to “overcome his or her biases”. According to Stanovich, there are large individual differences not only regarding IQ but also regarding rationality, and he is now in the process of constructing a test measuring people’s rationality quotient, RQ. Now my question is this: in which areas do you think high RQ people have comparative advantages, and in which areas do you think high IQ people have comparative advantages? My hunch is that IQ pays off better in precise fields like mathematics, physics and computer science, where the problems are often so hard that most people can’t solve them even if they overcome their biases and use their System 2, whereas high RQ pays off better in more ill-structured fields like qualitative sociology, where any individual line of reasoning usually is fairly simple, and therefore does not require a very high IQ, but where it is easy to fall prey to (politically) motivated reasoning, confirmation bias, and all sorts of other biases.
Hence in order to arrive at true theories, it seems to me that you need a high RQ in the social sciences. On the other hand, in order to sell your theories, RQ is not necessarily always helpful: on the contrary, a fair dose of overconfidence bias can be useful here. Many bigshot social scientists during the last century or so were anything but rational (Foucault and Freud are two of many examples), but were able to convince other (equally biased people) that they were.
As fields become more exact (as, for instance, psychology gradually has become), you gradually need a higher and higher IQ to compete: rationality is no longer enough. My guess is that as more and more fields grow more exact, moderate-IQ people will be of less and less use in academia.
I understand that bashing Freud is a popular way to signal “rationality”—more precisely, to signal loyalty to the STEM tribe, which is so much higher status than the social sciences tribe—but it really irritates me, because I would bet that most people doing this are merely repeating what they heard from others, building their model entirely on other people’s strawmen.
Mostly, it feels to me horribly unfair towards Freud as a person, to use him as a textbook example of irrationality. Compared with the science we have today, of course his models (based on armchair reasoning after observing some fuzzy psychological phenomena) are horribly outdated and often plainly wrong. So throw those models away and replace them by better models whenever possible; just like we do in any science! I mostly object to the connotation that Freud was less rational compared with other people living in the same era, working in the same field. Because it seems to me he was actually highly above the average; it’s just that the whole field was completely diseased, and he wasn’t rational enough to overcome all of that single-handedly. I repeat, this is not a defense of factual correctness of Freud’s theories, but a defense of Freud’s rationality as a person.
To put things in context, to show how diseased psychology was in Freud’s era, let me just say that Freud’s most famous student and later competitor, Carl Gustav Jung, rejected much of Freud’s teachings and replaced them with astrology / religion / magic, and this was considered by many people an improvement over the horribly offensive ideas that people could be predictably irrational, motivated by sexual desires, and generally frustrated with a modern society based on farmers’ values. (Then there was also the completely different school of Vulcan psychologists who said: Thoughts and emotions cannot be measured, therefore they don’t exist, and anyone who says otherwise is unscientific.) This was the environment which started the “Freud is stupid” meme, which keeps replicating on LW today.
I think the bad PR comes from a combination of two facts: 1) some of Freud’s ideas were wrong, and 2) all of his ideas were controversial, including those which were correct. So, first we have this “Freud is stupid” meme most people agree with, though mostly for the wrong reasons. Then the society gradually changes, and those of Freud’s ideas which happened to be correct become common sense and are no longer attributed to him; they are further developed by other people whom we remember as their authors. Only the wrong ideas are remembered as his legacy. (By the way, I am not saying that Freud invented all those correct ideas. Just that popularizing them in his era was a part of what made him controversial; what made the “Freud is stupid” meme so popular. Which is why I consider that meme very unfair.) So today we associate human irrationality with Dan Ariely, human sexuality with Matt Ridley, and Sigmund Freud only reminds us of lying on a couch debating which object in a dream represented a penis, and of underestimating the importance of the clitoris in female sexuality.
As someone who actually read a few of Freud’s books long ago (before reading books by Ariely, Ridley, etc.), here are a few things that impressed me. Things that someone got right a hundred years ago, when “it’s obviously magic” and “no, thoughts and emotions actually don’t exist” were the alternative famous models of human psychology.
(continued in next comment...)
(...continued)
The general ability to update. At the beginning of Freud’s career, the state-of-the-art psychotherapy was hypnosis, which was called “magnetism”. Some scientists had discovered that the laws of nature are universal, and some other scientists had jumped to the seemingly obvious conclusion that, analogically, all kinds of psychological forces among humans must be the same as the force which makes magnets attract or repel each other. So Freud learned hypnosis, used it in therapy, and was enthusiastic about it. But later he noticed that it had some negative side effects (female patients frequently falling in love with their doctors, returning to their original symptoms when the love was not reciprocated), and that the positive effects could also be achieved without hypnosis, simply by talking about the subject (assuming that some conditions were met, such as the patient actually focusing on the subject instead of focusing on their interaction with the doctor; a large part of psychoanalysis is about optimizing for these conditions). The old technique was thrown away because the new one provided better results. Not exactly “evidence-based medicine” by our current standards, but perhaps we could use as a control group all those doctors who stubbornly refused to wash their hands between doing autopsies and treating their patients, despite their patients dropping like flies. -- Later, Freud discarded his original model of the unconscious, preconscious and conscious mind, and replaced it with the “id, ego, superego” model. (This is provided as evidence of the ability to update, to discard both commonly accepted models and one’s own previous models. Which we consider an important part of rationality.)
Speaking about the “id, ego, superego” model, here is the idea of a human brain not being a single agent, but composed of multiple modules, sometimes opposed to each other. Is this something worth considering for Less Wrong readers, either as a theoretical step towards reduction of consciousness, or as a practical tool for e.g. overcoming akrasia? “Ego” as the rational part of the brain, which can evaluate consequences, but often doesn’t have enough power to enforce its decisions without emotional support from some other part of brain. “Id” as the emotional part which does not understand the concept of time. “Superego” as a small model of other people in our brain. Today we could probably locate the parts of the physical brain they correspond to.
“The Psychopathology of Everyday Life” is a book describing how seemingly random human errors (random movements, forgetting words, slips of the tongue) sometimes actually make sense if we perceive them as goal-oriented actions of some mental subagent. The biggest problem of the book is that it is heavy with theory, and a large part of it focuses on puns in German language… but remove all of this, don’t mention the origin, and you could get a highly upvoted article on Less Wrong! (The important part would be not to give any credit to Freud, and merely present it as an evidence for some LW wisdom. Then no one will doubt your rationality.) -- On the other hand, “Civilization and Its Discontents” is a perfect book to be rewritten into a series of articles on Overcoming Bias, about a conflict between forager mentality and farmer social values.
But updating and modelling human brains, those are topics interesting to Less Wrong readers. Most people would focus on, you know, sex. Well, how exactly could we doubt the importance of sexual impulses in a society where displaying a pretty lady is advertising 101, Twilight is a popular book, and the internet is full of porn? (Also, scientists accept the importance of sexual selection in evolution.) Our own society is a huge demonstration that Freud was right about the most controversial part of his theory. The only way to make him wrong about this is to create a strawman and claim that according to Freud everything was about sex, so if we find a single thing that isn’t, we have proved him wrong. -- But that strawman was already used in Freud’s era; he actually started one of his books by disproving it. Too bad I don’t remember which one. One of the case histories, probably. (It starts like: So, people keep simplifying my theories to say that all dreams are dogmatically about sex, so here is a simple example to correct the misunderstanding. And he describes a situation where some child wanted an ice cream, the parents forbade it, and the child was unhappy and cried. That night, the child had a dream about travelling to the North Pole, through mountains of snow. This, says Freud, is what resolving a suppressed desire in a dream typically looks like: The child wanted the ice cream, that’s desire #1, but the child also wanted to avoid conflict with their parents, that’s desire #2. How to satisfy both of them? The “mountains of snow” obviously symbolize the ice cream; the child wants it, and gets it, a lot! But to avoid a conflict with the parents, even in the dream, the ice cream is censored and becomes snow, so the child can plausibly deny to themselves disobeying their parents. This is Freud’s model of human dreams. It’s just that an adult person would probably not obsess so much about an ice cream, which they can buy if they really want it so much, but about something unavailable, such as a sexy neighbor; and also a smart adult would use more complex censorship to fool themselves.) Also, he had a whole book called “Beyond the Pleasure Principle” where he argues that some mind modules may be guided by principles other than pleasure, for example nightmares, repetition compulsion, aggression. (His explanation of this other principle is rather poor: he invents a mystical death principle opposing the pleasure principle. Anyway, it’s evidence against the “everything is about sex” strawman.)
Freud was an atheist, and very public about it. He essentially described religion as a collective mental disease, in a book called “The Future of an Illusion”. He used and recommended using cocaine… if he lived in the Bay Area today, and used modafinil instead, I can easily imagine him being a very popular Less Wrong member. -- But instead he lived a century ago, so he could only be one of those people spreading controversial ideas which are now considered obvious in hindsight.
tl;dr—I strongly disagree with using Freud as a textbook example of insanity. Many of his once controversial ideas are so obvious to us now that we simply don’t attribute them to him. Instead we just associate him with the few things he got wrong. And the whole meme was started by people who were even more wrong.
Hi Viliam,
thanks for your interesting and thoughtful response. Possibly I should have used another example. There are other, more clearcut cases in e.g. the postmodernist tradition, but I wanted someone more well-known.
The reason I chose him was not to signal loyalty to the STEM tribe, but rather because he is taken to be a textbook example of irrationality by Popper and Gellner, two of my favourite philosophers. Popper claimed that Freud’s theories were unfalsifiable and that for any possible event E, both E and not-E were standardly taken to confirm his theories. This is inconsistent with probability theory, as pointed out in “Conservation of Expected Evidence” (which is a very Popperian post). The reason Freud and his followers (I think some people have argued that some of his followers were actually worse on this point than Freud) made this mistake (if they did) presumably was confirmation bias (falsificationism can be seen as a tool to counter confirmation bias).
There is a huge literature on whether this claim is actually true. I have read Freud, and Gellner’s (to my mind very interesting) book on psycho-analysis, as well as some of Popper’s texts on the topic, so I’m not merely repeating ideas I’ve heard from others. That said, I don’t know the subject well enough to go into a detailed discussion of your claims. Also, it’s somewhat tangential to the topic. My point was not to bash Freud—that was, so to say, a side-effect of my claim.
Regarding your historical claims, I think it’s very hard to establish who introduced nebulous ideas such as Freud’s tripartite model of the mind. Some claim that Plato’s theory of the mind foreshadowed it. Gellner claims that all the good original ideas in Freud are taken from Nietzsche. I don’t know enough of the topic to determine whether any of these claims are true, but in order to establish whether they are, or whether Freud really was as significant and original as you claim, one would need to take a deep plunge into the history of ideas.
For the record, that long comment was not completely directed to you; it was something I have already thought should be written, and reading your comment was simply the moment when my inaction changed to action.
People are full of biases and rationalizations, and if you give them a theory which says “actually, other people often don’t even know what happens in their own minds”, well, that can hurt them regardless of whether the theory is true. And yes, this is what most amateur “psychologists” do after seeing “psychoanalysis” done on TV and learning the relevant keywords. And I guess not a few professional psychologists are not better than this. And yes, it made it difficult to argue against Freud in cases he was wrong.
Still, as I wrote, he was capable of changing his mind. And other psychoanalysts later disagreed on some topics. But without proper scientific method we can’t be sure that these changes really were improvements, as opposed to random drift (“I am a high-status psychoanalyst, so I will signal it by adding my random opinion to our set of sacred beliefs”).
Some parts of psychoanalysis make predictions; the problem is that, unlike in physics, humans can react in many different ways. It’s like black-box testing where each “box” is internally wired differently. We do have a prediction that a dream will contain a censored version of a suppressed desire. And it feels like it should be testable. But how specifically will the desire be censored? Uhm… this depends on the specific person, on what associations they have, so again we can suspect that any result could be “explained” as some form of censorship of something.
According to Wikipedia, Popper compared Freud with Einstein, as two people living in the same era whose scientific rigor was completely different. Yeah, there was a huge difference. There was also a huge difference in the amount and quality of data they had, the available tools, the complexity of the studied objects, and the general waterline of sanity in their fields. (Again, “it’s magic” and “people actually don’t think” were the respected alternative theories. Imagine starting from a similar position in physics.)
Like I said, there is a huge discussion on this issue in the philosophy of science. My guess is that most of your arguments above have already been discussed extensively.
Grünbaum’s book is considered a classic on the subject and might be a place to start (I haven’t read it, though Gellner refers a lot to it). He is critical of psycho-analysis but rejects Popper’s view of it as a pseudo-science.
By how many standard deviations of the general public would you predict analytical philosophers or physicists will outperform academic postmodernists once Stanovich’s test is ready?
Oh I don’t know. I think I’ve met some pretty irrational analytical philosophers too, actually. But I would expect the difference to be substantial, yes. Did you read about the Sokal affair? It says something of the level of irrationality and intellectual irresponsibility.
Irresponsibility is something very different than irrationality.
Do you judge postmodernists because their tribe does things that you don’t like, or do you judge them because you think the average postmodernist would score lower on a proper Rationality Quotient test than members of other tribes?
If you really think that they would score lower on a Rationality Quotient test, it should be possible for you to make predictions about the effect size in numbers. You are free to set your error bars as wide as you wish, or to choose another tribe than analytical philosophers to compare against if you think there’s a better comparison.
Right, finding a single anecdote where members of a tribe that you don’t like failed is a rational way to assess the general rationality of the average member of that tribe.
I don’t even know how the test is constructed, so it would be downright silly of me to try to come up with predictions in terms of numbers.
Sarcasm does not further a constructive debate. Also, I think your way of arguing is generally too nit-picky and uncharitable. I wasn’t trying to argue against you or anything; I just wanted to give you a tip.
Sokal actually wrote a book with Jean Bricmont indicating that this was far from an isolated anecdote. Also my judgement from having (had to) read quite a bit of postmodernist crap is that Sokal is spot on.
No, the fact that you have some uncertainty about the test just means that you should choose a larger confidence interval than if you knew the details of the test. It shouldn’t stop you from being able to produce a confidence interval.
I don’t have any issue with people arguing with me. I’m more likely having an issue with people who assume that I’m ignorant of the subject I’m talking about. Not knowing about Sokal would be a case of ignorance. But that’s still not a major issue.
Tribalism is a huge failure condition. I don’t think it’s helpful to pretend that it isn’t. Practicing charity in the sense of assuming that the people with whom one argues are immune to effects like tribalism is not conducive to truth-finding.
You yourself wrote a post about identifying patterns of bad reasoning. You won't get very far with that project if you discuss under social norms that forbid people from pointing out those patterns.
The irony of you criticising Freud for not making falsifiable predictions while being unwilling to make concrete, numeric, falsifiable predictions about the supposed irrationality of postmodernists is too central to ignore out of a desire for politeness.
Part of science is that you are not charitable about predictions: you don't interpret them as confirmed regardless of what data you find. That's especially important when you say negative things about an outgroup that you don't like. It's a topic where you have to be extra careful to follow principles of proper reasoning.
This might seem to you as nit-picky but it’s very far from it. You don’t make a discourse more rational by analysing it in a dissociative way if you don’t actually apply your tools for bias detection.
The whole issue with the Sokal episode was that the journal's editors were very charitable to Sokal and therefore published his paper.
The fact that you have some uncertainty about the test also has some implications about the distribution of possible results. If a group is 10% less rational than another and that 10% is due to a characteristic that makes those group members systematically worse than the comparison group, you can measure a lot of group members and confirm that you get measurements that average 10% less.
If a group is 20% less rational than another group but there’s a 50% chance the test detects the difference and a 50% chance it doesn’t, that can also be described as you expecting results showing the group is 10% less rational. But unlike in the first case, you can’t take a lot of measurements and get a result that averages out to 10% less. You’ll either get a lot of results that average 20% less or a lot of results that aren’t less at all, depending on whether the test detects or doesn’t detect it.
And in the second case, the answer to “can I use the test to make predictions” is “no”. If you’re uncertain about the test, you can’t use it to make predictions, because you will be predicting the average of many samples (in order to reduce variation), and if you are uncertain about the test, averaging many samples doesn’t reduce variation.
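To make the distinction concrete, here is a toy simulation of the two cases (the numbers and the test_detects switch are purely hypothetical, just to illustrate that per-subject noise averages out while uncertainty about the test itself does not):

```python
import random

def measured_gap(n_subjects, test_detects, deficit, noise=0.05):
    """Average rationality gap measured by ONE test applied to n_subjects.

    test_detects is a hypothetical switch: whether this particular test
    happens to be sensitive to the group's deficit at all.
    """
    true_gap = deficit if test_detects else 0.0
    samples = [true_gap + random.gauss(0, noise) for _ in range(n_subjects)]
    return sum(samples) / n_subjects

random.seed(0)

# Case 1: the deficit is always measurable; per-subject noise averages out,
# so a large sample converges on the true 10% gap.
print(measured_gap(10_000, test_detects=True, deficit=0.10))   # ~0.10

# Case 2: there is a 50% chance the single test we built is blind to the
# deficit. Measuring more subjects with that same test does NOT average the
# uncertainty away: the result comes out near 0.20 or near 0.00, never 0.10.
this_test_detects = random.random() < 0.5
print(measured_gap(10_000, test_detects=this_test_detects, deficit=0.20))
```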
Rationality is not a binary variable, but continuous. It is NOT the case that the test has a chance of detecting something or nothing: the test will output a value on some scale. If the test is not powerful enough to detect the difference, it will show up as the difference being not statistically significant—the difference will be swamped by noise, but not just fully appear or fully disappear in any given instance.
Nope—that would only be true if rationality were a boolean variable. It is not.
That doesn't follow. For instance, imagine that one group is irrational because their brains freeze up at any problem that contains the number 8, and some tests contain the number 8 and some don't. They'll fail the former tests, but be indistinguishable from the other group on the latter tests.
I can imagine a lot of things that have no relationship to reality.
In any case, you were talking about a test that has a 50% chance of detecting the difference, presumably returning either 0% or 20% but never 10%. Your example does not address this case—it’s about different tests producing different results.
You were responding to Stefan. As such, it doesn’t matter whether you can imagine a test that works that way; it matters whether his uncertainty over whether the test works includes the possibility of it working that way.
If you don’t actually know that they freeze up at the sight of the number 8, and you are 50% likely to produce a test that contains the number 8, then the test has a 50% chance of working, by your own reasoning—actually, it has a 0% or 100% chance of working, but since you are uncertain about whether it works, you can fold the uncertainty into your estimate of how good the test is and claim 50%.
Keep in mind the editors of Social Text did not believe Sokal’s article was actually sound philosophy. Not understanding it, they preferred to give it the benefit of the doubt. The only thing that Sokal was able to trick them into believing was that the article was intended to be sound philosophy.
That’s like excusing oneself from causing a car crash on the grounds of being drunk.
In what way? Who was injured?
They are both pleading incompetence as an excuse for failure.
We only know that’s what they said afterwards.
By the same argument, we only know it was intended to be a hoax because Sokal said so afterward....
Sokal is a physicist, and a publication like this would have been a major embarrassment inside his field. So he had no choice but to disclose the hoax before anyone else (who maybe didn't get the joke) commented on it.
There's an anecdote near the beginning of "Introduction to Psychoanalysis" where he discusses the dreams of arctic explorers, which are almost entirely about food, not about sex, for understandable reasons.
Freud’s theory was supposed to be a theory of the human mind, thus it should apply to humans in every human society. So why are you focusing on one society in particular (specifically one that was heavily shaped by people who believed Freud’s theories) as your demonstration that Freud was correct?
Edit: Could you state the controversial theory of Freud's that you claim has been demonstrated? Surely you don't mean his entire theory of psychosexual development.
No. That theory is a textbook example of burdensome details. (Also, typical family fallacy.) I can imagine that having a problem at age X—which in a given culture is associated with doing Y—could visibly increase the probability of having a psychological symptom Z in adulthood. But that theory just gives too many details for something that at best would be a wide probabilistic distribution of outcomes.
Mind composed of multiple agents; people often motivated by sex even when they deny it; human mind not well adapted to civilization; religion as institutionalized neurosis.
They don’t seem controversial anymore. (Okay, the last one does to many people.)
So Freud was correct if you ignore the details of what he said and steelman the hell out of what he “meant”.
The idea of the mind being composed of multiple components has been around for all of recorded history. Granted it wasn’t phrased as multiple “agents”, but Freud didn’t phrase it that way either.
Yes, people sometimes deny their true motivations. However, the specific claim that these secret motivations are almost always sexual is still not clear today, and is probably false.
If this is meant to refer to his theory of psychological repression: it's become clear that his way of stating it wasn't a good idea. Certainly worse than the traditional way of stating it, namely that children need to be taught to like good things and dislike bad things.
Well, the attempts at creating states without this neurosis created even more neurotic states, but I suppose you already knew that.
I dispute that. There is evidence that some cultures had concepts of multiple souls; the Ancient Egyptians and Inuit come to mind. But Greek and post-Greek philosophy and the Abrahamic religions firmly established the idea that humans have a single indivisible (“monadic”) soul in all the cultures they pervaded, and that very much includes 19th century Vienna.
So you might say component models of the mind existed, but they certainly weren't "around". Freud might have heard of the Ancient Egyptian concept of the soul, but it certainly wasn't something a mainstream scientist could have referred to in order to lend his theory credibility.
Which is why one of the most commonly read Platonic dialogues, The Republic, has a famous treatment of the psyche as having three parts bearing no little resemblance to the id/ego/superego, and Plato's student Aristotle has a hierarchy of faculties?
BTW, FWIW IIRC Dante Alighieri in the Divine Comedy claimed that the soul was indivisible and pointed to inattentional blindness as evidence for that.
Regardless of whether Freud's ideas were correct or not, what about his methods? How did he come by his ideas? Were his hypotheses even falsifiable, and if so, did he attempt to rigorously falsify them?
If the answer is "no", then, while I will grant you that Freud was possibly relatively more rational than his colleagues at the time, it would still be quite a stretch to call him a rationalist in absolute terms.
The answer is "no". However, compare with Darwin. His method was also "observing and creating models that fit observations". (He also got some things wrong: AFAIK he assumed that all traits are continuously divisible; genes were discovered by Mendel later. But generally his success ratio was much better, and his field was also much saner.)
Also, Freud did some kinds of experiments. He was not merely a philosopher, he also cured people, and it seemed to him that his theories worked. But he didn't have a control group, etc.
I could be wrong, but didn't Darwin actually formulate some hypotheses, and then go out there and count finches (and other things) to see if his predictions were true? I think that's why his success rate was so much better (though, admittedly, not perfect): he conducted experiments in the real world, using real math.
How did he know if his theories actually worked, then? Was he even making his patients better in any way (as compared to other patients who saw other doctors, or perhaps no doctors at all)?
He was convinced that “couch therapy” worked better than hypnosis, but I don’t know whether he kept records to prove it.
(Sorry, I have read all this decades ago, and then I was interested in his models of mind, not in technical details. Now I know that those details are critical, but I don’t remember whether I read about them or not.)
You italicize words with asterisks, like this: *methods*. There is a "Show help" button below the comment box, on the right.
Sorry, it gets difficult to keep all the commenting systems straight after a while.
Thanks for your long and insightful comment. I think it should be edited and put up as a top-level article. It's something that I'd personally love to link my friends to every time they start strawmanning Freud.
Thank you! I was considering this option, but as a LW article it would deserve better research, citations, etc. Maybe later, in unspecified future.
Who do you mean?
“No, this is horrible; decent people don’t have dirty thoughts! You are completely ignoring the supernatural aspects of the human soul” kind of people.
This is a completely inaccurate depiction of Psychology as it existed during Freud’s time. You list Jung, one of Freud’s victims, as the only example of a “rival.” I think perhaps this is standard continental Euro-Chauvinism. Could it be that you are really unaware of Francis Galton’s development of psychometrics or William James’ monumental Principles of Psychology? James is a good example of someone who was a predecessor/contemporary of Freud who studied the same topic but did not go utterly off the rails into Crazy Land the way Freud did. He took a naturalistic view of the human mind, drawing upon introspection and empiricism. Galton’s contributions were vast and showed actual mathematical rigor.
Freud's biggest contribution was probably his attempt to invent psychopharmacology. (The short-term outcome was getting a lot of unfortunate people addicted to cocaine, but the basic idea had merit.) As for his theory of the human mind, it is worthless and set Psychology back by decades.
Sadly Freudian Psychoanalysis is Religion and Big Business now, and still practiced heavily in Mitteleuropa and parts of South America.
James was impressive. Galton… did something a bit different; also impressively.
With regard to Galton and psychometrics in general: that's another problem with psychology, that it is a wide field, and somehow (at least in the past) the things that were interesting were difficult to measure, and the things that were easy to measure were not interesting to most people. This is why there were so many different schools in psychology: they often didn't strictly contradict each other; it was more that everyone discussed something else, and yeah, sometimes they made huge generalizations based on the part they studied.
Imagine that you go to a doctor and say: "I feel unhappy, sometimes I have trouble sleeping at night, and I don't know why, but I've noticed myself behaving irrationally towards my girlfriend lately. Can you help me, doc?" And the doctor says: "You know, I specialize in something else. I shine a flashlight into people's eyes and measure how many milliseconds it takes them to blink. I have a lot of data, serious statistics and stuff. I can measure how fast you blink, and tell you whether you are a slow-blinker or a fast-blinker with p < 0.0001. Certified 100% pure science."
That’s not doing the same thing better; that’s doing a different thing. Yes, he is a good scientist, but he didn’t answer your question, and he can’t cure you. Fifty years later, someone may build a therapy by gradually expanding his research, though.
At the risk of outing myself as a smug STEM tribalist...my view of Freud is pretty dim, a big reason for which is that secondary sources (e.g.), citing specific details, argue that Freud exaggerated the robustness of his theories, failed to keep basic factual details straight, and even fabricated observations outright.
Admittedly, I haven’t read Freud himself (I’m one of the people “merely repeating what they heard from others”), so the charges levelled at him might be groundless, but they seem plausible & well-substantiated. And once substantiated, a pattern of self-aggrandization, sloppiness, and fabrication seems to me fair grounds for calling Freud (epistemically) irrational, even though some of his ideas turned out to be true.
It isn’t clear to me that we would have anything resembling the approaches to a scientific concept of consciousness that we have today, were it not for passing through something like psychoanalysis on the way. Freud’s contributions would have been replaceable, of course, had he not been there — just as Galileo’s would be. But condemning Freud seems like condemning Newton, who likewise had plenty of wrong ideas.
The cult of Freud is unfortunate, but not particularly relevant to his contribution.
The difference is that Newton also had plenty of right ideas.
What did you think of the four Freudian ideas Viliam_Bur suggested to you?
Edit, September 2: well, now I guess I know.
This brings us to a couple of additional reasons why Freud-bashing, or tarring Freud as irrational, could be unfair. Maybe he was a necessary step along the road to scientific psychology. Maybe it’s an unfair double standard to bash Freud while ignoring the wrongnesses of (e.g.) Galileo & Newton.
I personally disagree with the first reason. True or not, I don’t see it as justifying the wilful shoddiness from which a big chunk of Freud’s work apparently suffers. I see no reason why a counterfactual Freud couldn’t have come up with basically the same ideas without engaging in PR campaigns, unforced errors, and lies.
I’m more open to the second argument, but I’d want evidence that Galileo & Newton not only had “plenty of wrong ideas”, but tried to further those wrong ideas by bullshitting & fabricating as much as Freud did to further his wrong ideas. Otherwise there’s no real double standard.
As it happens, both Galileo & Newton have been accused of scientific misconduct. I don’t really know the details about Galileo’s case, but I know some for the case against Newton. In short, Newton used fudge factors to shift various estimates of physical quantities in his Principia. However, reading the rap sheet more closely, it sounds like Newton was quite explicit about making his adjustments, in which case he wasn’t engaging in misconduct. I’d guess there’s some similar subtlety in Galileo’s case which people miss, but as ever I could be wrong.
There may be other similarly famous scientists who were crowned geniuses and really did use misconduct to defend substantially wrong beliefs. Mendel, Kepler, Ptolemy, Pasteur, Robert Millikan, and even Einstein are promising candidates, having all been accused of scientific misconduct.
I know little about the Einstein, Kepler, or Ptolemy accusations. As for the others, my lay understanding is that scientists still argue over whether the close match between Mendel’s data and Mendel’s theory is suspicious (and indeed whether it can be explained by unconscious bias rather than conscious fiddling); that Pasteur suppressed the results of experiments which seemed to contradict germ theory, and didn’t cooperate with other scientists who wanted to run such experiments; and that Millikan lied about excluding his least plausible data points in his reports on his oil-drop experiments. So Pasteur & Millikan both lied about which results they were presenting, but the results themselves were all genuine, and both researchers were defending theories which were basically correct, not incorrect. Mendel, meanwhile, may not have committed misconduct at all! So I’ve yet to find a true parallel to Freud in the STEM pantheon.
[Edited 31⁄08 to fix “Einsten”.]
Here’s a series of articles about the history and background of the Galileo controversy
http://tofspot.blogspot.co.uk/2013/08/the-great-ptolemaic-smackdown.html
tl;dr Galileo went beyond the data he had to justify the Copernican model—his argument about tides was incorrect (he neglected the role of the moon) and his argument via the motion of sunspots was explicable within the Tychonic model.
Politically, he had just about the best hand dealt to him from the start and proceeded to play it stupidly. He had many close friends in the Church (including the Pope himself!) but his bullishness and lack of tact led him to alienate them one by one. By the standards of the time he got off with a slap on the wrist.
Of course, none of this is to say that his opponents didn’t do and say similarly stupid things, but it wasn’t a simple Brave Rational Iconoclast David vs Decrepit Reactionary Goliath Institution narrative.
Thanks for the summary. In itself that doesn’t sound much like misconduct, as it’s quite possible to go beyond the data and make incorrect/superfluous arguments without being negligent or deceptive.
(I could read the series you link, plus its references, to try to discern whether negligence or deception actually was involved, but after flicking through the first three parts — 14,000 words or so — and not spotting big smoking guns, I put the remaining posts on my mental when-I-get-round-to-it-on-a-rainy-day list.)
There is a bigger problem. People's System 2 can be even more unreliable than their System 1. System 2 uses whatever one consciously believes, which can very well include large amounts of falsehood and anti-epistemology, e.g., believing the theories of Foucault and Freud. In fact, System 1 frequently saves people from their System 2's irrational beliefs.
The Wikipedia article on hyperbolic discounting suggests that hyperbolic discounting could be a winning move under certain types of uncertainty about the rate at which the expected reward will "disappear" before you collect it.
They have a nice mathy example, but intuitively, the longer a reward is observed to stick around, the more confident you should be that the reward will actually stick around for the remaining offset period. Hence the observed preference reversal.
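The mathy example can be reproduced numerically; here's a rough sketch (my own toy parameters, not the article's) showing that exponential discounting with an uncertain hazard rate averages out to a hyperbolic curve:

```python
import numpy as np

# If the reward disappears at a known hazard rate lam, the chance it is
# still there at time t (i.e. the rational discount) is exp(-lam * t).
# If lam is itself uncertain -- here drawn from an exponential prior with
# mean k -- the expected discount averages to 1 / (1 + k * t), which is
# exactly the hyperbolic form.
k = 0.5
rng = np.random.default_rng(0)
lams = rng.exponential(scale=k, size=200_000)   # uncertain hazard rates

for t in [1, 2, 5, 10]:
    monte_carlo = np.exp(-lams * t).mean()
    hyperbolic = 1.0 / (1.0 + k * t)
    print(t, round(monte_carlo, 3), round(hyperbolic, 3))
```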
The Smoking Lesion problem is
“Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke? Is seems clear that she should.”
But now assume that Susan suffers from painful anxiety proportional to her Bayesian estimate of the probability of her getting lung cancer. This anxiety plays a bigger role in her utility function than any enjoyment she might get from smoking. Should she still smoke?
Susan will have less anxiety if she doesn't smoke, so doesn't this mean she shouldn't smoke? But when Susan is making the decision about smoking, couldn't she say to herself, "Whether I smoke will have no effect on the probability of my getting lung cancer, and since my brain makes a rational estimate of the probability of my getting lung cancer when deciding how much anxiety to dump on me, whether I smoke shouldn't impact my level of anxiety, so I should smoke since I enjoy it"? Clearly, if Susan flipped a coin to decide if she should smoke, her anxiety would be the same regardless of how the coin landed. Also, is this functionally the same as Newcomb's problem?
Generally, this is false.
If I took the time to write a comment laying out a decision theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed and suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others engaging with my query.
I’ve been frustrated enough times by people nitpicking or derailing (even if only with not-supposed-to-be-derailing throwaway jokes) my attempts to introduce a hypothetical that by this point I would guess that in most cases it’s actually rude to respond like this unless you’re really, really sure that your nitpick of a premise actually significantly affects the hypothetical or that you’ve got a really good joke. In Should World people would evaluate the seriousness of a thought experiment on its merits and not by the immediate non-serious responses to it, but experience says to me that’s not a property of the world we actually live in.
If I’m interpreting your comment correctly, you’re either stating that it’s not the case that people’s brains make rational probability estimates (which everybody on friggin’ LessWrong will already know!), or denying a very specific, intentionally artificial statement about the relation between credences and anxiety that was constructed for a decision theory thought experiment. In either case I’m not sure what the benefits of your comment are.
Am I missing something that you and the upvoters saw in your comment?
Edit: Okay, it occurs to me that maybe you were making an extremely tongue-in-cheek, understated rejection of the premise for comical effect—‘Haha, the thought experiments we use are far divorced from the actual vagaries of human thought’. The fact I found it so hard to get this suggests to me that others probably didn’t get the intended interpretation of your comment, which still leaves potential for it to have the negative effects I mentioned above. (E.g. maybe someone got your joke immediately, had a hearty laugh, and upvoted, but then the other upvoters thought they were upvoting the literal interpretation of your post.)
I would guess that the common factor causes her to have an urge to smoke, and if she wants to find out whether she has cancer, it is irrelevant whether she actually smokes, she only has to see whether she has the urge. She has it, dang. Time to smoke.
You offer a reasonable interpretation, but the Smoking Lesion problem only becomes interesting if even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.
If you know your source code, then whether you actually smoke cannot provide additional Bayesian information on whether you are likely to get lung cancer. Your decision is a logical consequence of your source code, and Bayesianism assumes logical omniscience. Humans in general don’t know their source code, but if Susan is using a formal decision theory to make this decision (which should be assumed since we’re asking what decision theory Susan should use), then she knows her source code for the purpose of making this decision.
If it's not the urge, what is it? The decision algorithm? If so, I don't think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn't apply to the smoker, as they're using a different decision-making process.
I don’t think you’re taking the thought experiment seriously enough and are prematurely considering it (dis)solved by giving a Clever solution. E.g.
Obvious alternative that occurred to me in <5 seconds: It's not the urge, it's the actual act of smoking or knowing one has smoked. Even if these turn out not to quite work, you don't show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem, looking for a reduction that does not leave us feeling confused.
Edit: In fact, James already effectively said ‘the act of smoking’ in the comment to which you were replying!
It’s not enough to say “the act of smoking”. What’s the causal pathway that leads from the lesion to the act of smoking?
Anyway, the smoking lesion problem isn’t confusing. It has a clear answer (smoking doesn’t cause cancer), and it’s only interesting because it can trip up attempts at mathematising decision theory.
Exactly, that's part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That's the hard edge of the problem, and even if the correct answer turns out to be 'it depends' or 'the question doesn't make sense' or involves a dissolution of reference classes or whatever, then one paragraph isn't going to provide a solution and cut through the confusions behind the question.
It seems like your argument proves too much because it would dismiss taking Newcomb’s problem seriously. ‘It’s not enough to say the act of two-boxing...’ I don’t think your attitude would have been productive for the progression of decision theory if people had applied it to other problems that are more mainstream.
That’s exactly the point Wei Dai is making in the post I linked!! Decision theory problems aren’t necessarily hard to find the correct specific answers to if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories, and they make us draw up more robust general decision processes or illuminate our own decisions.
If you had said
in response to Newcomb’s problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you’re right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who do not already believe that, and given the world we live in, can come across as a way of bragging about your non-confusion or lowering their ‘status’ by making it look like they’re confused about an easily settled issue, even if that’s not what you’re (consciously) doing.
If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they’re wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!
I'm not interested in discussing at this level of condescension.
Both of these are invalidated by the assumption:
I’m not sure we disagree; as James said, whether one actually smokes provides information about lung cancer, which is possible regardless of whether smoking is the cause of the cancer. My comment was intended to be more general than causality.
But that would depend on other factors, not just the probability of lung cancer. It depends on her motivation to smoke (relaxation, social partnership, reducing her anxiety, stress management). The temporary benefit gained from smoking may outweigh the Bayesian probability of her getting lung cancer, which would most likely take 20-30 years to take hold, compared with using other means to mitigate her very present problems. If she is deciding whether to smoke based solely on the probability of getting lung cancer and her anxiety about it, the rational brain usually chooses to reduce risk rather than increase it, so if she is looking to reduce her anxiety she should choose not to smoke, because the chance of lung cancer derived from smoking goes down. However, she could also use nicotine as a way to cope with her stress or anxiety, which may provide much better present relief and mitigate any anxiety she has about developing lung cancer in the future.
Looking for suggestions:
I’m brainstorming crafts or skills to try out as classes for my library. I’m trying to come up with a list of practical skills that my patrons might be interested in, such as knitting, sewing, scrap-booking, potentially gun maintenance and safety (the local police department sometimes holds classes of this nature).
I’m looking for suggestions of other such skills that might be useful for a library to offer to the general public. These are practical skills classes, so things such as investment or business start-ups are a little outside the scope of what we’re looking at providing. At least in this setting.
Any suggestions?
How to paint your house. Basic plumbing and electrical. How to check your car’s oil, radiator fluid. How to open an investing account rather than a savings account (very practical). Knife skills for the kitchen.
I’m wanting to start an investment class anyway. We have a lot of near-retirees panicking because the state’s pension plan they’re on is almost worthless. So a lot of patrons are wanting to find ways at the last minute to bolster their income. Not the ideal target audience, sure, but it’d be a start. I’m hoping to eventually attract the middle-aged and newly married who can really benefit from learning to invest now.
Who will teach the class?
“Here’s how to open a (insert online investing resource here) account. Here is what percentage of index funds vs bonds to buy at different ages and risk tolerances. Here are which things to buy. Do not buy stocks. Go forth and invest.”
You don’t need a financial professional to teach that, but you might need a good teacher. You have to take an unfamiliar process and make it familiar and approachable so that people actually do it.
What makes you sure an online investing (=trading) account is appropriate for a given person?
What makes you think a mix of (presumably) large-cap equity and bonds—and nothing else—in a certain proportion is an optimal mix looking forward?
Do tell. Which things to buy?
I think that the library class that will do people the most good is one that just gets people with more than a month's budget in their bank account to open an investing account and buy literally two things, one low-risk moderate-return and one high-expected-return. My recommendations in the absence of actually doing research would be US federal bonds and an index fund of small-cap stocks—likely not optimal, but even more likely better than a savings account.
I have a few potential resources in the area. One from a local firm. With the right resources and help I could also personally present the materials.
I think all the needle crafts are handy:
Needlepoint, crochet, knitting, and embroidery.
I think that sewing and learning to make a dress, blouse, pants or how to hem your own pants is useful and most people don’t know how to sew anymore.
I would also consider reaching out to organizations to see who you can bring in and then just serve as a point of contact and outreach to the community.
I’m not sure that coming up with the skills yourself is the best idea. You could simply announce that you provide space for anyone to hold a class to teach whatever they want to teach.
Introductory programming perhaps? (Presumably you have some public computers in your library.)
I have a friend majoring in computer science who's learning to make apps and is creating a few presentations for just such a class. I'm considering offering a tutoring service for students at the library and, if that works, going from there into basic programming skills. While I am not personally a skilled programmer, with the right resources I could teach the basics.
Is there a name for the following pattern?
1. An argument, or just noticing confusion
2. "He looks way too confident; he's probably better at the field or has significant information"
3. Catastrophic failure more or less matching my predictions
I seem to run into this a lot lately, but the alternative of assuming I’m correct seems even worse. I’m also often not in a position to ask about the source of their confidence.
Accurately judging the confidence that other people have in their own judgements isn’t easy. It can be that the other person just doesn’t care whether they are wrong.
In political discussions people often argue their positions quite strongly even if they don’t have good evidence for them. You might mistake that strong arguing as confidence when it’s rather the opposite.
Fight, flight and freeze responses are all instances of fear but look quite different. Especially shy people often confuse a fight response with confidence when it isn’t.
Would you care to post some predictions?
I didn’t explicitly make a prediction, but it sounds like https://news.ycombinator.com/item?id=8202234 would be an instance of the pattern.
Does anyone have experience with Wikipedia’s feature to create your own collection of articles in the form of a book? My attempts to create one with formulae and diagrams don’t look very nice and I’d like to have great Wikipedia articles ready on my Kindle for offline reading.
I compiled the Wikipedia book on atheism, and it appears to work fine:
https://en.wikipedia.org/wiki/Book:Atheism
Have you tried using Calibre combined with the Wiki Reader plug-in?
Not yet, but I’ll have a look. If I use that does the result look better than Wikipedia’s own tool?
From Venkatesh Rao: The Creation and Destruction of Habits
Interesting.
From…
...it seems to follow that the future (whatever kind of AGI, singularity, or technological advance) will be ugly and sociopathic. No wonder many fear this.
No one in their right mind would fix their health issues without consulting a physician, an expert in physical health. Yet we are very willing to go through most of our lives without consulting any other experts, even if we deeply care about most of it. What experts and/or assistants are very worth consulting, either in terms of saved time acquiring relevant knowledge, ease of mind, or greatly enhanced results? I am thinking along the lines of training alone versus having your physician assess your physical health once a year and contacting the trainer you meet once every month to adjust your exercise routine.
Quite a lot of people start diets without consulting a physician.
The tongue-in-cheek answer would be to say that they are not in their right mind.
But I’d rather ask: Is it worthwhile to consult a physician before a diet change? Since, from my lay understanding, dietary needs are highly individual I’d say yes. Except for the dietary change the vast majority of people need: To consume fewer calories and more vegetables.
Physicians can’t do magic. Dietary needs are to some degree individual but that doesn’t mean that your physician necessarily knows what’s best for you.
Healthy living is often about switching habits, and that's not something where physicians can help you much via a 15-minute (or shorter) conversation.
If you suspect gluten insensitivity it can make sense to get tested by a physician but in many cases you just have to be aware of what’s happening with you. How does your body react to different kinds of food? What stands in the way of changing your habits?
Isn’t individuality of dietary needs reason not to consult a physician? In most cases it’s going to be impractical for a physician to study any individual patient’s requirements. They may also be legally or professionally prohibited from the kind of experimentation needed to find those requirements.
I see a lot of consulting in the following areas:
sports—esp. if happening in a fitness center
parental and relationship advice (at least it is offered quite a lot in Germany and we used it)
job and career advice—there is a whole profession for placing prospective youth into jobs, and later on quite some effort is made to help you find a job (though quality has declined a lot in Germany in recent years)
If reading scientific books counts as 'consulting experts' then quite a lot of topics (including diet above) count, right?
It very much does! But I was hoping to get some interesting suggestions, like specific books or professions, on here instead of the usual nagging that the question isn’t in the best possible format.
That just isn’t true. Every time you call a plumber or hand your car to an auto mechanic or go to a class, etc. etc. you consult an “other expert”.
Reliance on other people has its own built-in problems—e.g. the agency issue, or deciding which expert to pick (say, you want to lose weight—there is a large variety of experts all disagreeing with each other...).
Physicians are rarely interested in assessing, or trained to assess, physical health. What they do instead is assess the absence of disease, which is a different thing.
Can someone link to a discussion, or answer a small misconception for me?
We know P(A & B) < P(A). So if you add details to a story, it becomes less plausible. Even though people are more likely to believe it.
However, if I do an experiment and measure something which is implied by A&B, then I would think "A&B becomes more plausible than A", because A is more vague than A&B.
But this seems to be a contradiction.
I suppose, to me, adding more details to a story makes the story more plausible if those details imply the evidence. sin(x) is an analytic function. If I know a complex differentiable function has roots at all multiples of pi, saying the function is sin is more plausible than saying it's some analytic function.
I think...I’m screwing up the semantics, since sin is an analytic function. But this seems to me to be missing the point.
I read a technical explanation of a technical explanation, so I know specific theories are better than vague theories (provided the evidence is specific). I guess I'm asking for clarification on how this is formally consistent with P(A) > P(A&B).
A&B gains more evidence than A from the experiment. It doesn’t (and can’t) become more probable.
Let’s have an example. Someone is flipping a coin repeatedly. The coin is either a fair one or a weighted one that comes up heads 3x as often as tails. (A = “coin is weighted in this way”.) The person doing the flipping might be honest, or might be reporting half the tails she flips (i.e., each one randomly with p=1/2) as heads. (B = “person is cheating in this way”.)
Let’s say that ahead of time you think A and B independently have probability 1⁄10.
Your experiment consists of getting the (alleged) results of a single coin flip, which you’re told was heads.
So. Beforehand the probability of A was 1⁄10 and that of B was 1⁄100.
The probability of your observed results is: 1⁄2 under (not-A, not-B); 3⁄4 under (not-A, B); 3⁄4 under (A, not-B); and 7⁄8 under (A, B).
So the posterior probabilities for the four possibilities are proportional to (81:9:9:1) times (4:6:6:7); that is, to (324:54:54:7). Which means the probability of A has gone up from 10% to about 14%, and the probability of A&B from 1% to about 1.6%.
So you’ve got more evidence for A&B than for A, which translates (more or less) to a larger relative gain in probability for A&B than for A. But A&B is still less likely.
If you repeat the experiment and keep getting heads, then A&B will always improve more than A alone. But the way this works is that after a long time almost all the probability of A comes from the case where A&B, so that A&B’s advantage in increase-in-probability gradually goes away.
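For what it's worth, a few lines of Python reproduce these numbers (this is just a check of the example above, nothing more):

```python
from itertools import product

p_A, p_B = 0.10, 0.10   # priors: weighted coin (A), cheating reporter (B)

def p_reported_heads(a, b):
    p_heads = 0.75 if a else 0.5          # weighted coin: heads 3x as often
    p_tails = 1 - p_heads
    return p_heads + (p_tails * 0.5 if b else 0.0)  # cheater relabels half the tails

joint = {}
for a, b in product([True, False], repeat=2):
    prior = (p_A if a else 1 - p_A) * (p_B if b else 1 - p_B)
    joint[(a, b)] = prior * p_reported_heads(a, b)

total = sum(joint.values())
print(sum(v for (a, _), v in joint.items() if a) / total)  # P(A | heads) ~ 0.139
print(joint[(True, True)] / total)                         # P(A&B | heads) ~ 0.016
```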
So plausibility isn’t the only dimension for assessing how “good” a belief is.
A or not A is a certainty. I’m trying to formally understand why that statement tells me nothing about anything.
The motivating practical problem came from this question,
“guess the rule governing the following sequence” 11, 31, 41, 61, 71, 101, 131, …
I cried, "Ah, the sequence is increasing!" With pride I looked into the back of the book and found the answer "primes ending in 1".
I’m trying to zone in on what I did wrong.
If I had said instead, "the sequence is a list of numbers"—that would be stupider, but well in line with my previous logic.
My first attempt at explaining my mistake was by arguing that "it's an increasing sequence" was actually less plausible than the real answer, since the real answer was making a much riskier claim. I think one can argue this without contradiction (the rule is either vague or specific, not both).
However, it's often easy to show whether some infinite product is analytic. Making the jump that the product evaluates to sin, in particular, requires more evidence. But in some qualitative sense, establishing that latter goal is much better. My guess was that establishing the equivalence is a more specific claim, making it more valuable.
In my attempt to formalize this, I tried to show this was represented by the probabilities. This is clearly false.
What should I read to understand this problem more formally, or more precisely? Should I look up formal definitions of evidence?
“S is an increasing sequence” is a less specific hypothesis than “S consists of all prime numbers whose decimal representations end in 1, in increasing order”. But “The only constraint governing the generation of S was that it had to be an increasing sequence” is not a less specific hypothesis than “The only constraint governing the generation of S was that it had to consist of primes ending in 1, in increasing order”.
If given a question of the form "guess the rule governing such-and-such a sequence", I would expect the intended answer to be one that uniquely identifies the sequence. So I'd give "the numbers are increasing" a much lower probability than "the numbers are the primes ending in 1, in increasing order". (Recall, again, that the propositions whose probabilities we're evaluating aren't the things in quotation marks there; they're "the rule is: the numbers are increasing" and "the rule is: the numbers are the primes (etc.)".)
Moving back to your question about analytic functions: Yes, more specific hypotheses may be more useful when true, and that might be a good reason to put effort into testing them rather than less specific, less useful hypotheses. But (as I think you appreciate) that doesn’t make any difference to the probabilities.
The subject concerned with the interplay between probabilities, preferences and actions is called decision theory; you might or might not find it worth looking up.
I think there’s some philosophical literature on questions like “what makes a good explanation?” (where a high probability for the alleged explanation is certainly a virtue, but not the only one); that seems directly relevant to your questions, but I’m afraid I’m not the right person to tell you who to read or what the best books or papers are. I’ll hazard a guess that well over 90% of philosophical work on the topic has close to zero (or even negative) value, but I’m making that guess on general principles rather than as a result of surveying the literature in this area. You might start with the Stanford Encyclopedia of Philosophy but I’ve no more than glanced at that article.
I think of it in terms of making a $100 bet.
So you have the sequence S: 11, 31, 41, 61, 71, 101, 131.
A: is the “bet” (i.e. hypothesis) that the sequence is increasing by primes ending in 1. There are very few sequences (below the number 150) you can write where you have an increasing sequence of primes ending in 1, so your “bet” is to go all in.
B: is the “bet” that the sequence is increasing. But a “sequence that’s increasing” spreads more of its money around so it’s not a very confident bet. Why does it spread more of its money around?
If we introduced a second sequence X: 14, 32, 42, 76, 96, 110, 125
You can still see that B can account for this sequence as well, whereas A cannot. So B has to at least spread its betting money between the two presented sequences, S and X, just in case either of those is the answer presented in the back of the book. In reality there are an untold number of sequences that B can account for besides the two here, meaning that B has to spread its betting money across all of those sequences if B wants to "win" by "correctly guessing" what the answer was in the back of the book. This is what makes it a bad bet; a hypothesis that is too general.
This is a simple mathematical way you can compare the two “bets” via conditional probabilities:
Pr(S | B) + Pr(X | B) + Pr(?? | B) = 1.00 and Pr(S | A) + Pr(X | A) + Pr(?? | A) = 1.00
Pr(S | A) is already all in because the A bet only fits something that looks like S. Pr(S | B) is less than all in because Pr(X | B) is also a possibility, as is any other increasing sequence of numbers, Pr(?? | B). This is a fancy way of saying that the strength of a hypothesis lies in what it can't explain, not what it can; ask not what your hypothesis predicts, but what it excludes.
Going by what each bet excludes you can see that Pr(?? | A) < Pr(?? | B), even if we don't have any hard and fast numbers for them. While there is a limited number of increasing 7-number patterns below 150, this is a much larger set than the number of 7-number patterns below 150 that are increasing primes ending in 1.
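If it helps, the "spreading the bet" picture can be made quantitative with a quick count (my own back-of-the-envelope sketch; the cutoff of 150 is just taken from the sequence above):

```python
from math import comb

# Candidates below 150 that are primes ending in 1.
primes_ending_in_1 = [p for p in range(2, 150)
                      if str(p).endswith("1")
                      and all(p % d for d in range(2, int(p ** 0.5) + 1))]

# Number of strictly increasing 7-term sequences each "bet" has to cover.
n_bet_A = comb(len(primes_ending_in_1), 7)  # increasing primes ending in 1
n_bet_B = comb(149, 7)                      # any increasing sequence drawn from 1..149
print(len(primes_ending_in_1), n_bet_A, n_bet_B)
# 7 candidates, so bet A covers exactly 1 sequence; bet B has to spread its
# money over roughly 3e11 sequences.
```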
A&B cannot be more probable than A, but evidence may support A&B more than it supports A.
For example, suppose you have independent prior probabilities of 1⁄2 for A and for B. The prior probability of A&B is 1⁄4. If you are then told “A iff B,” the probability for A does not change but the probability of A&B goes up to 1⁄2.
The reason specific theories are better is not that they are more plausible, but that they contain more useful information.
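A quick enumeration of the four possible worlds confirms the "A iff B" example (again just a sanity check, nothing new):

```python
from itertools import product

# Prior: A and B independent, each with probability 1/2.
prior = {(a, b): 0.25 for a, b in product([True, False], repeat=2)}

# Evidence "A iff B": keep only the worlds where A and B agree, renormalize.
post = {w: p for w, p in prior.items() if w[0] == w[1]}
total = sum(post.values())
post = {w: p / total for w, p in post.items()}

print(sum(p for (a, _), p in post.items() if a))  # P(A | A iff B) = 0.5, unchanged
print(post[(True, True)])                         # P(A&B | A iff B) = 0.5, up from 0.25
```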
A more specific explanation is better than a general explanation in the scientific sense exactly because it is more easily falsifiable. Your sentence
is completely wrong, as the set containing the sine function is most certainly contained in the set of all analytic functions, making it more plausible that “some analytic function has roots at all multiples of pi” than to say the same of sine, assuming we do not already know a great deal of information about sine.
Plain and simply no. If evidence E implies A and B, formally E → A&B, then separately E → A and E → B are true, increasing the probability of both separately, making your conclusion invalid.
If A,B,C are binary, values of A and B are drawn from independent fair coins, and C = A XOR B, then measuring C = 1 constrains A,B to be either { 1, 1 } or { 0, 0 }, but does not constrain A alone at all.
Before we conditioned on C=1, all values of the joint variable A,B had probabilities 0.25, and all values of a single variable A had probabilities 0.5. After we conditioned on C=1, values { 0, 0 } and { 1, 1 } of A,B assume probabilities 0.5, and values { 0, 1 } and { 1, 0 } of A,B assume probabilities 0, values of a single variable A remain at probability 0.5.
By conditioning on C=1, you learn more about the joint variable A,B than about a single variable A (because your posterior for A,B changed, but your posterior for A did not), but that is not the same thing as the joint variable A,B being more plausible than the single variable A. In fact, it is still the case that p(A & B | C) ≤ p(A | C) for all values of A,B.
edit: others below said the same, and often better.
I think we tend to intuitively “normalize” the likelihood of a complex statement. Our prior is probably Kolmogorov complexity, so if A is a 2-bit statement and B is a 3-bit statement, we would “expect” the probabilities to be P(A)=1/4, P(B)=1/8, P(A&B)=1/32. If our evidence leads us to adjust to say P(A)=1/3, P(A&B)=1/4, then while A&B is still less likely than A, there is some sense in which A&B is “higher above baseline”.
Coming from the other end, predictions, this sort of makes sense. Theories that are more specific are more useful. If we have a theory that this sequence consists of odd numbers, that lets us make some prediction about the next number. If our theory is that the numbers are all primes, we can make a more specific, and therefore more useful, prediction about the next number. So even though the theory that the sequence is odd is more likely than the theory that the sequence is prime, the latter is more useful. I think that’s where the idea that specific theories are better than vague theories comes from.
P(A & B) ≤ P(A)
I'm guessing that the rule P(A & B) < P(A) is for independent variables (though it's actually more accurate to say P(A & B) ≤ P(A)). If you have dependent variables, then you use Bayes' theorem to update. P(A & B) is different from P(A | B). P(A & B) ≤ P(A) is always true, but not so for P(A | B) vs. P(A).
This is probably an incomplete or inadequate explanation, though. I think there was a thread about this a long time ago, but I can’t find it. My Google-fu is not that strong.
Not so. Stories usually are considerably more complicated than can be represented as ANDing of probabilities.
A simple example: Someone tells me that she read my email to Alice, let’s say I think that’s X% plausible. But then she adds details: she says that the email mentioned a particular cafe. This additional detail makes the plausibility of this story skyrocket (since I do know that the email did mention that cafe).
So maybe it’s worth saying explicitly what’s going on here: You’re comparing probabilities conditional on different information.
A = “Beth read my email to Alice”. B = “Beth knows that my email to Alice mentioned the Dead Badger Cafe”. I = “Beth told me she read my email to Alice”. J = “Beth told me my email to Alice mentioned the Dead Badger Cafe”.
Now P(A&B|I) < P(A|I), and P(A&B|I&J) < P(A|I&J), but P(A&B|I&J) > P(A|I).
So there’s no contradiction; there’s nothing wrong with applying probabilities; but if you aren’t careful you can get confused. (For the avoidance of doubt, I am not claiming that Lumifer is or was confused.)
And, yes, I bet this sort of conditional-probability structure is an important part of why we find stories more plausible when they contain lots of details. Unfortunately, the way our brains apply this heuristic is far from perfect, and in particular it works even when we can’t or won’t check the details and we know that the person telling us the story knows this. So it leads us astray when we are faced with people who are unscrupulous and good at lying.
Um. I was just making a point that "we know P(A & B) ≤ P(A)" is a true statement coming from math logic, while "if you add details to a story, it becomes less plausible" is a false statement coming from human interaction.
Not sure about your unrolling of the probabilities since P(B|A) = 1 which makes A and B essentially the same. If you want to express the whole thing in math logic terms you need notation as to who knows what.
My reading of polymer’s statement is that he wasn’t using “plausible” as a psychological term, but as a rough synonym for “probable”. (polymer, if you’re reading: Was I right?)
No, P(B|A) is a little less than 1 because Beth might have read the email carelessly, or forgotten bits of it.
[EDITED to add: If whoever downvoted this would care to explain what they found objectionable about it, I’d have more chance of fixing it. It looks obviously innocuous to me even on rereading. Thanks!]
I’m not quite sure what the following means:
I don’t care whether it’s false as a “human interaction”. I care whether the idea can be modeled by probabilities.
Is my usage of the word plausible in this way really that confusing? I’d like to know why… Probable, likely, credible, plausible, are all (rough) synonyms to me.
My sister is interested in environmental charities, a category which Givewell has no recommendations about. Does anyone know of any actually good ones?
What draws her to environmental charities? Concern for animals? Concern for humans? Fighting global warming with its likely negative effects on both?
Before GiveWell/whoever can make a recommendation they need to know what the person wants. The best environmental charity for preventing species extinction is going to be very different than the best one for preventing animal suffering.
Good point! When I asked her earlier she said she wanted to save the rainforest to stop global warming, but I don’t think she’s completely inflexible about this.
“she wanted to save the rainforest to stop global warming”
Katja Grace (of Meteuphoric) did some research for Giving What We Can looking into climate change charities. She wrote up her findings as a blog post.
Thank you, this is very useful!
/facepalm
The top rated givewell charity for climate change actually seems to do rainforest protection.
Do you have a link? I am not convinced that they have ever considered a climate change charity.
Oh, sorry, I mistook the Giving What We Can blog post for GiveWell.
My comment in the previous thread.
I’d like to share a finished portion of my “Useful Idea of Truth” video for minor feedback, if people are interested in seeing it. The Sally-Anne task explained.
Let me know what you like, what you don’t, and if you think these videos will be worth the time I put into them.
In the context of Pixar’s upcoming movie Inside Out, I just discovered the existence of a 1990s sitcom titled Herman’s Head. I’ve watched a few episodes and it’s hilarious to see how it represents the battle of agents in the mind. Sometimes they even include mental models of other people. I’m very excited to see how Pixar will do it.
Ignore this. It is a test of embedding images in LessWrong’s comments.
I have just set up a tumblr account (coffeespoonsposts)! Any recommendations for good tumblrs?
I am working on the post-neoliberalism Wikipedia article and I wanted to see what people thought should be included. I've pulled together some resources, and since it's a topic of interest to me, I'd like to see what people think should be covered, including prominent theorists and so on.
This comment is the first time I’ve ever heard the term “post-neoliberal”.
What exactly is post-neoliberalism?
Post-neoliberalism is the economic theories and policies that are coming out of the neoliberal (think Milton Friedman) economics of the mid-20th century. This is most pronounced in South America, where neoliberal thinking as proposed by the Chicago School (Milton Friedman and company) was most widely practiced. Many of these governments are trying to balance the need of industry and companies to support the countries they do business in as well as provide for growth and jobs thereby. They are trying to balance the socialist past with free markets and globalization. So this article will address common theories and their theorists, policies, and other decisions being made in regard to the move away from neoliberal economic philosophy.
Do you actually mean “coming out of” or do you mean “expected to replace”? Because economic theories coming out of neoliberal economics would probably be called “neoliberal”.
Anywhere other than Chile?
All in all you seem to be talking about economic theories of development. There are a lot of those. Why do you think it's useful to stick a "post-neoliberal" moniker on them, especially given that Wikipedia seems to think that "neoliberal" is a pejorative term used mostly by people who don't like markets?
Perú and Colombia for the last couple decades.
Any links or other evidence that Peru and Colombia were practicing specifically neoliberalism and not just sane policy (e.g. based on the idea that “markets work pretty well most of the time, better than the alternatives, anyway”)?
What “sane” happens to mean depends a lot on who’s judging.
The IMF certainly pushed Latin American countries towards adopting neoliberal policies that they otherwise wouldn’t have implemented.
True.
That happens a lot when you want to borrow other people’s money :-)
Assorted links:
http://www.solidarity-us.org/site/node/709
http://colombiajournal.org/colombias-neoliberal-madness.htm
http://www.theguardian.com/global-development/poverty-matters/2011/jun/06/colombia-amoral-development-uribe
http://www.cepal.org/publicaciones/xml/1/20061/dancourt.pdf
http://upsidedownworld.org/main/peru-archives-76/4956-peru-passes-a-packet-of-neoliberal-reforms-erodes-environmental-protections-and-labor-rights
http://citizenspress.org/editorials/neoliberalism-in-latin-america
Incidentally, we appear to have very different ideas of what constitutes sane policy. I support state control over essential services (education, healthcare, agriculture, energy) and strong regulations for everything else.
Ah, I see.
The US is a neoliberal country, then, is it?
It’s very difficult to give an answer. For example, drug and food safety regulations are stricter in the U.S. than in Colombia (drugs are sold here which the FDA wouldn’t poke with a ten-meter-long stick). The U.S. subsidizes its own farmers while discouraging its trade partners from doing so. Its minimum wage, albeit heavily criticized, is three times that of Colombia. On the other hand, the U.S. finds itself in the indefensible position of being the only first-world country without universal healthcare, while Colombia has mandatory vacations, paid sick days, and better unionization.
The difficulty suggests to me that your concept of neoliberalism is not well-defined.
But, frankly, it looks to me like you’re treating it as “not socialism”.
And I still have no idea what post-neoliberalism is.
If you asked whether the US is a conservative country, answering wouldn't be easy either. That doesn't mean that "conservative" is a word without meaning.
One of the points of the Washington Consensus is “Redirection of public spending from subsidies (“especially indiscriminate subsidies”) toward broad-based provision of key pro-growth, pro-poor services like primary education, primary health care and infrastructure investment;”
The US itself doesn't manage to muster the political will to cut agricultural subsidies, but that doesn't mean it doesn't use institutions such as the IMF to pressure other countries into cutting theirs.
The word neoliberalism came up to describe policies that the US pushed in Latin America, often against the local democratic will.
Socialism is by its nature for open borders; "The Internationale" calls for uniting different peoples. Neoliberalism is similar in that it also wants to get rid of borders.
Treating corporations as persons who can sue states in Investor-state dispute settlement proceedings is part of what neoliberalism is about. Such clauses were put into law in the US through trade agreements rather than through the normal democratic process.
If you asked Ron Paul about Investor-state dispute settlement, I'm pretty sure he would reject the idea because it means that states give up part of their sovereignty. In that sense Ron Paul is no neoliberal. Ron Paul is also for free trade, but he has a very different idea of what free trade is supposed to mean than the neoliberals do.
I don’t know his position on that issue, but he’s certainly in favor of people suing government agencies that he believes have abused their power.
Ron Paul generally votes against free trade treaties because they mean giving up sovereignty. He certainly thinks it's all right when the Supreme Court strikes down a law for being unconstitutional, but that's not what happens in Investor-state dispute settlement.
This is not about suing government agencies for violating the law. It's about suing states, in secret international courts without juries, for having laws that go against things written down in a free trade treaty. The Canadian government thought that a pesticide from Dow was harmful, so it forbade it. Investor-state dispute settlement then allowed Dow to sue over the lost sales.
Ron Paul also favors states' rights, which don't exist when some international court can simply punish a state for policy decisions that are in agreement with the state's constitution and law.
As far as I can make out, the "post-" prefix in words such as postneoliberal, postcolonial, postmodern, etc. means not merely "after", but also "in reaction or opposition to", with connotations of supersession of the old and bad by the superior new and good. Claiming the "post-" moniker for oneself is a way of linguistically framing the situation (that is, casting a magic spell) to define oneself into having the high moral ground.
Sure, but you're talking about verbal jiu-jitsu techniques, basically. However, here I just don't know which meaning the OP wants to associate with the label "post-neoliberal". There are a lot of ways the "superior new and good" can play out.
So are Friedman and the Chicago School supposed to be neoliberal, or post-neoliberal?
Milton Friedman would be neoliberal.
From Googling the term, it seems to be a word for South American politics. South American issues are generally not strongly debated on the English-speaking internet. You would probably find a better reception in some Spanish-speaking venues.
I've never heard Milton Friedman, or anyone else prior to this, refer to Milton Friedman as a post-neoliberal or a neoliberal. And I've read a fair bit of Friedman, and watched many interviews of him.
It's not a label that he used to describe himself; the word neoliberal is mostly used by people on the left to label people like Milton Friedman.
So he’s a neoliberal?
So who is a post neoliberal?
And what’s the difference?
21st century socialist Latin American politicians like Luiz Inácio Lula da Silva would be post-neoliberal.
Joseph Stiglitz might qualify as far as thinkers go, but most are probably Latin Americans we have never heard of.
Post-neoliberal thought seeks to reincorporate some of the Keynesian economic policies that had been popular, while preserving the competitiveness and growth potential that neoliberal economics offers. I'm still looking into the foremost post-neoliberal theorists.
I find that a little difficult, but yes, the other comment is true: it's more academic economic jargon.