Open thread, August 4-10, 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. Observe screenshots. I’ll also post this next time SSC has a new open thread (unless Yvain happens to notice this).
Great idea and nicely done! It also had the additional benefit of constituting my very first interaction with JavaScript, because I needed to modify some things. (Specifically, to avoid the use of localStorage.)
I’m curious what you used instead (cookies?), or did you just make a historyless version? Also, why did you need that? localStorage isn’t exactly a new feature (hell, IE has supported it since version 8, I think).
It appears that my Firefox profile has some security features that mess with localStorage in a way that I don’t understand. I used Greasemonkey’s GM_[sg]etValue instead. (Important and maybe obvious, but not to me: their use has to be declared with @grant in the UserScript preamble.)
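For anyone else who hits the same snag, here is a minimal sketch of what the preamble and calls look like (the @name and @match values are placeholders for illustration, not the script’s actual metadata):

```javascript
// ==UserScript==
// @name     SSC recent-comments helper (sketch)
// @match    *://slatestarcodex.com/*
// @grant    GM_getValue
// @grant    GM_setValue
// ==/UserScript==

// With the @grant lines present, Greasemonkey exposes GM_getValue and
// GM_setValue, which persist values in the extension's own storage
// rather than the page's localStorage.
var lastVisit = GM_getValue('lastVisit', 0); // second argument is the default
GM_setValue('lastVisit', Date.now());
```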
This looks excellent.
I tried downloading it by clicking on “install the extension”, but it doesn’t seem to get to my browser (Chrome). Am I missing something?
“Install the extension” is a link bringing you to the chrome web store, where you can install it by clicking in the upper-right. The link is this, in case it’s Github giving you trouble somehow.
If the Chrome web store isn’t recognizing that you’re running Chrome, that’s probably not a thing I can fix, though you could try saving this link as something.user.js, opening chrome://extensions, and dragging the file onto the window.
Thank you. That worked. I never would have guessed that an icon which simply had the word “free” on it was the download button.
Would it be worth your while to do this for LW? It makes me crazy that the purple edges for new comments are irretrievably lost if the page is downloaded again.
Sure. Remarkably little effort required, it turned out. (Chrome extension is here.)
I guess I’ll make a post about this too, since it’s directly relevant to LW.
This doesn’t seem to handle stuff deep enough in the reply chain to be behind “continue this thread” links. On the massive threads where you most need the thing, a lot of the discussion is going to end up beyond those.
It seems to work for me. “Continue this thread” brings you to a new page, so you’ll have to set the time again, is all. Comments under a “Load more” won’t be properly highlighted until you click in and out of the time textbox after loading them.
The use case is that I go to the top page of a huge thread, the only new messages are under a “Continue this thread” link, and I want the widget to tell me that there are new messages and help me find them. I don’t want to have to open every “Continue” link to see if there are new messages under one of them.
Ah. That’s much more work, since there’s no way of knowing whether there are new comments in such a situation without fetching all of those pages. I might make that happen at some point, but not tonight.
Thanks very much. I think there’s an “unpack the whole page” program somewhere. Anyone remember it?
Thanks a million!
In your inbox, Less Wrong comments have the options “context” and “report” (in that order), whereas private messages have “report” and “reply” (in that order). Many times I’ve accidentally pressed “report” on a private message, and fortunately caught myself before continuing.
I’d suggest reversing the order of “report” and “reply”, so that they fit with the comments options.
Right, that’s my tiny suggestion for this month :-)
I wrote a userscript to add a delay and checkbox reading “I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities.” before allowing you to comment on LW. Done in response to a comment by army1987 here.
Edit: per NancyLebovitz and ChristianKl below, solicitations for alternative default messages are welcomed.
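For the curious, the mechanics are roughly as follows. This is only a sketch: the selector for LW’s comment-submit button is a hypothetical stand-in, not the script’s actual code.

```javascript
// ==UserScript==
// @name     LW comment pledge (sketch)
// @match    *://lesswrong.com/*
// @grant    none
// ==/UserScript==

// Disable each comment-submit button until a pledge checkbox is ticked,
// then wait a short delay before re-enabling it.
// 'input[value="Comment"]' is a hypothetical selector for illustration.
Array.prototype.forEach.call(
  document.querySelectorAll('input[value="Comment"]'),
  function (btn) {
    btn.disabled = true;
    var box = document.createElement('input');
    box.type = 'checkbox';
    var label = document.createElement('label');
    label.appendChild(box);
    label.appendChild(document.createTextNode(
      ' I swear by all I hold sacred that this comment supports the ' +
      'collective search for truth to the very best of my abilities.'));
    btn.parentNode.insertBefore(label, btn);
    box.addEventListener('change', function () {
      if (box.checked) {
        // The delay: re-enable 3 seconds after the box is ticked,
        // unless it has been unticked in the meantime.
        setTimeout(function () { btn.disabled = !box.checked; }, 3000);
      } else {
        btn.disabled = true;
      }
    });
  });
```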
“To the very best of my abilities” seems excessive to me, or at least I seem to do reasonably well with “according to the amount of work I’m willing to put in, and based on pretty good habits”.
I’m not even sure what I could do to improve my posting much. I could be more careful to not post when I’m tired or angry, and that probably makes sense to institute as a habit. On the other hand, that’s getting rid of some of the dubious posting, which is not the same thing as improving the average or the best posts.
Even when I’d only been here a few weeks, your posting had already caught my eye as unusually mindful & civil, and nothing since has changed my impression that you’re far better than most of us at conversing in good faith and with equanimity.
Given the recent discussion about how rituals can give the appearance of cultishness, it’s probably not a good time to bring that up at the moment ;)
Testing this...
Nope, doesn’t seem to work. (I am probably doing something wrong as I never used Greasemonkey before.)
Just tested this on a clean FF profile, so it’s almost certainly something on your end. Did you successfully install the script? You should’ve gotten an image which looks something like this, and if you go to Greasemonkey’s menu while on a LW thread, you should be able to see it in the list of scripts run for that page. Also, note that you have to refresh/load a new page for it to show up after installation.
Oh, and it only works for new comments, not new posts. It should look something like this, and similarly for replies.
ETA: helpful debugging info: if you can, let me know what page it’s not working on, and let me know if there are any errors in the developer console (shift-control-K or command-option-K for Windows and Mac respectively).
I had interpreted “Save this file as” in an embarrassingly wrong way. It works now!
(Maybe editing the comment should automatically uncheck the box, otherwise I can hit “Reply”, check the box straight away, then start typing my comment.)
Does anyone know if something urgent has been going on at MIRI, other than the Effective Altruism Summit? I am a job applicant, though I have no idea about my status as one. Days ago I was promised a chat today, but nothing was arranged regarding time or medium, and now it is the end of the day. I sent my application weeks ago and have been in contact with three of the employees who seem to work on the management side of things. This is a bit frustrating. Ironically, I applied as Office Manager, and hope that (if hired) I would be doing my best to take care of these things: putting things on a calendar, working to help create a protocol for ‘rejecting’, ‘accepting’, or ‘deferring’ employee applications, etc. Have other people had similar, disorganized correspondence with MIRI? Or has it mostly been organized, suggesting that I should take this experience as a sure sign of rejection?
Yes.
Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. The source is my friend interning at MIRI right now. So, anyway, they might have been even busier than you thought. I hope this has cleared up now.
Still haven’t heard anything back from them in any sort of way. But thanks for making their circumstances even clearer!
Heard back & talked with them. My personal issue is now resolved.
Oblique request made without any explanation: can anyone provide examples of beliefs that are incontrovertibly incorrect, but which intelligent people will nonetheless arrive at quite reasonably through armchair-theorising?
I am trying to think up non-politicised, non-controversial examples, yet every one I come up with is a reliable flame-war magnet.
ETA: I am trying to reason about disputes where on the one hand you have an intelligent, thoughtful person who has very expertly reasoned themselves into a naive but understandable position p, and on the other hand, you have an individual who possesses a body of knowledge that makes a strong case for the naivety of p.
What kind of ps exist, and do they have common characteristics? All I can come up with are politically controversial ps, but I’m starting my search from a politically-controversial starting point. The motivating example for this line of reasoning is so controversial that I’m not touching it with a shitty-stick.
Mathematical arguments happen all the time over whether 0.99999...=1 but I’m not sure if that’s interesting enough to count for what you want.
That “0.99999....” represents a concept that evaluates to 1 is a question of notation, not mathematics. 0.99999… does not inherently equal 1; rather, by convention, it is understood to mean 1. The debate is not about the territory, it is about what the symbols on the map mean.
Where does one draw the line, if at all? “1+1 does not inherently equal 2; rather, by convention, it is understood to mean 2. The debate is not about the territory, it is about what the symbols on the map mean.” It seems to me that—very ‘mysteriously’—people who understand real analysis never complain “But 0.999… doesn’t equal 1”; sufficient mathematical literacy seems to kill any such impulse, which seems very telling to me.
Yes, and that’s a case of “you don’t understand mathematics, you get used to it.” Which applies exactly to notation and related conventions.
Edit:
More specifically, if we let a_k = 9/10^k, and let s_n be the sum from k=1 to n of a_k, then the limit of s_n as n goes to infinity will be 1, but 1 won’t be in {s_n | n in N}.
When somebody who is used to calculus sees “.99...”, what they are thinking of is the limit, which is 1.
But before you get used to that, most likely what you think of is some member of {s_n | n in N} with an n that’s large enough that you can’t be bothered to write all the nines, but which is still finite.
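For concreteness, the sum has a closed form (standard geometric-series algebra):

```latex
s_n = \sum_{k=1}^{n} \frac{9}{10^k} = 1 - 10^{-n},
\qquad
\lim_{n\to\infty} s_n = 1,
\qquad
s_n < 1 \text{ for every finite } n.
```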
Exactly. The arguments about whether 0.99999… = 1 are lacking a crucial item: a rigorous definition of what “0.99999...” refers to. The argument isn’t “Is the limit as n goes to infinity of the sum from k=1 to n of 9*10^-k equal to 1?” It’s “Here’s a sequence of symbols. Should we assign this sequence of symbols the value of 1, or not?” Which is just a silly argument to have. If someone says “I don’t believe that 0.99999… = 1”, the correct response (unless they have sufficient real analysis background) is not “Well, here’s a proof of that claim”, it’s “Well, there are various axioms and definitions that lead to that being treated as being equal to 1”.
It’s not. The “0.999… doesn’t equal 1” meme is largely crackpottery, and promotes amateur overconfidence and (arguably) mathematical illiteracy.
Terms are precious real estate, and their interpretations really are valuable. Our thought processes and belief networks are sticky; if someone has a crap interpretation of a term, then it will at best cause unnecessary friction in using it (e.g. if you define the natural numbers to include -1, ..., -10 and have to retranslate theorems because of this), and at worst one will lose track of the translation between interpretations and end up propagating false statements (“2^n can sometimes be less than 2 for n natural”).
It would be an accurate response (even if not the most pragmatic or tactful) to say, “Sorry, when you pin down what’s meant precisely, it turns out to be a much more useful convention to define the proposition 0.999...=1 such that it is true, and you basically have to perform mental gymnastics to try to justify any usage where it’s not true. There are technically alternative schemas where this could fail or be incoherent or whatever, but unless you go several years into studying math (and even then maybe only if you become a logician or model theorist or something), those are not what you’ll be encountering.”
One could define ‘marble’ to mean ‘nucleotide’. But I think that somebody who looked down on a geneticist for complaining about people using ‘marble’ as if it means ‘nucleotide’, and who said it was a silly argument as if the geneticist and the person who invented the new definition were Just As Bad As Each Other, would be mistaken, and I would suspect they were more interested in signalling their Cleverness via relativist metacontrarianism than getting their hands dirty figuring out the empirical question of which definitions are useful in which contexts.
Actually, I could imagine you reading that comment and feeling it still misses your point that 0.999… is undefined or has different definitions or senses in amateur discussions. In that case, I would point to the idea that one can make propositions about a primitive concept that turn out to be false about the mature form of it. One could make claims about evidence, causality, free will, knowledge, numbers, gravity, light, etc. that would be true under one primitive sense and false under another. Then minutes or days or months or years or centuries or millennia later it turns out that the claims were false about the correct definition.
It would be a sin of rationality to assume that, since there was a controversy over definitions, and some definitions proved the claim and some disproved it, that no side was more right than another. One should study examples of where people made correct claims about fuzzy concepts, to see what we might learn in our own lives about how these things resolve. Were there hints that the people who turned out to be incorrect ignored? Did they fail to notice their confusion? Telltale features of the problem that favoured a different interpretation? etc.
A lot (in fact, all of them that don’t involve a rigorous treatment of infinite series) of the “proofs” that it does equal 1 are fallacious, and so the refusal to accept them is actually a reasonable response.
You seem to be making an assertion about me in your last paragraph, but doing so very obliquely. Your analogy is not very good, as people do not try to argue that one can logically prove that “marble” does not mean “nucleotide”; they just say that it is defined otherwise.
If we’re analogizing “.9999… = 1” to “marble doesn’t mean ‘nucleotide’”, then “
Apologies for that. I don’t think that that specific failure mode is particularly likely in your case, but it seems plausible to me that other people thinking in that way has shifted the terms of discourse such that that form of linguistic relativism is seen as high-status by a lot of smart people. I am more mentioning it to highlight the potential failure mode; if part of why you hold your position is that it seems like the kind of position that smart people would hold, but I can account for those smart people holding it in terms of metacontrarianism, then that partially screens off that reason for endorsing the smart people’s argument.
It looks like you submitted your comment before you meant to, so I shall probably await its completion before commenting on the rest.
And yet I somehow doubt most of these people reject connectedness.
I thought about this on & off over the last couple of days and came up with more candidates than you can shake a shitty stick at. Some of these are somewhat political or controversial, but I don’t think any are reliable flame-war magnets. I expect some’ll ring your cherries more than others, but since I can’t tell which, I’ll post ’em all and let you decide.
The answer to the Sleeping Beauty puzzle is obviously 1⁄2.
Rational behaviour, being rational, entails Pareto optimal results.
Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.
Truth is an absolute defence against a libel accusation.
If a statistical effect is so small that a sample of several thousand is insufficient to reliably observe it, the effect’s too small to matter.
Controlling for an auxiliary variable, or matching on that variable, never worsens the bias of an estimate of a causal effect.
Human nature being as brutish as it is, most people are quite willing to be violent, and their attempts at violence are usually competent.
In the increasingly fast-paced and tightly connected United States, residential mobility is higher than ever.
The immediate cause of death from cancer is most often organ failure, due to infiltration or obstruction by spreading tumours.
Aumann’s agreement theorem means rationalists may never agree to disagree.
Friction, being a form of dissipation, plays no role in explaining how wings generate lift.
Seasons occur because Earth’s distance from the Sun changes during Earth’s annual orbit.
Beneficial mutations always evolve to fixation.
Multiple discovery is rare & anomalous.
The words “male” & “female” are cognates.
Given the rise of online piracy, the ridiculous cost of tickets, and the ever-growing convenience of other forms of entertainment, cinema box office receipts must be going down & down.
Looking at voting in an election from the perspective of timeless decision theory, my voting decision is probably correlated and indeed logically linked with that of thousands of people relatively likely to agree with my politics. This could raise the chance of my influencing an election above negligibility, and I should vote accordingly.
The countries with the highest female life expectancies are approaching a physiologically fixed hard limit of 65 — sorry, 70 — sorry, 80 — sorry, 85 years.
The answer to the Sleeping Beauty puzzle is obviously 1⁄3.
Language in general might be a rich source of these, between false etymologies, false cognates, false friends, and eggcorns.
Thanks for that list. I believed (or at least, assigned a probability greater than 0.5 to) about five of those.
Thanks for this. These are all really good.
Now I just need to think of another 21 and I’ll have enough for a philosophy article!
… don’t they? (in the long run)
No, they don’t—look at contemporary Western countries and their birth rates.
Oh yes I know that, I just meant in the long-long run. This voluntary limiting of birth rates can’t last for obvious evolutionary reasons.
I have no idea about the “long-long” run :-)
The limiting of birth rates can last for a very long time as long as you stay at replacement rates. I don’t think “obvious evolutionary reasons” apply to humans any more, it’s not likely another species will outcompete us by breeding faster.
Any genes that make people defect by having more children are going to be (and are currently being) positively selected.
Besides, reducing birthrates to replacement isn’t anything near a universal phenomenon, see the Mormons and Amish.
It’s got nothing to do with another species out-competing us—competition between humans is more than enough.
This observation should be true throughout the history of the human race, and yet the birth rates in the developed countries did fall off the cliff...
And animals don’t breed well in captivity.
Until they do.
This happened barely half a generational cycle ago. Give evolution time.
So what’s your prediction for what will happen when?
In the “long-long run”, given ad hoc reproductive patterns, yeah, I’d expect evolution to ratchet average human fertility higher & higher until much of humanity slammed into the Malthusian limit, at which point “when people have more food they have more kids” would become true.
Nonetheless, it isn’t true today, it’s unlikely to be true for the next few centuries unless WWIII kicks off, and may never come to pass (humanity might snuff itself out of existence before we go Malthusian, or the threat of Malthusian Assured Destruction might compel humanity to enforce involuntary fertility limits). So here in 2014 I rate the idea incontrovertibly false.
That’s a tall order. I’ll try:
Noticing that people who are the best in any sport practice the most and concluding that being good at a sport is simply a matter of practice and determination. Tabula Rasa in general.
The supply-demand model of minimum wage? Is this political? I’m not saying minimum wage is good or bad, just that the supply-demand model can’t settle the question yet people learning about economics tend to be easily convinced by the simple explanation.
That thermodynamics proves that weight loss + maintenance is simply a matter of diet and exercise (this is more Yudkowsky’s fight than mine).
I doubt it is possible to find non-controversial examples of anything, and especially of things plausible enough to be believed by intelligent non-experts, outside of the hard sciences.
If this is true, the only plausible examples would be such as “an infinity cannot be larger than another infinity”, “time flows uniformly regardless of the observer”, “biological species have unchanging essences”, and other intuitively plausible statements unquestionably contradicted by modern hard sciences.
Most new drugs fail clinical trials.
Intelligent people make theories about how a drug is supposed to work and think it would help to cure some illness. Then, when the drug is brought into clinical trials, more than 90% of new drugs still fail to live up to the theoretical promise that the drug held.
A fun one which came up recently on IRC: everyone thinks that how your parents raise you is incredibly important, this is so obvious it doesn’t need any proof and is universal common sense (how could influencing and teaching a person from scratch to 18 years old not have deep and profound effects on them?), and you can find extended discussions of the best way to raise kids from Plato’s Republic to Rousseau’s Emile to Spock.
Except twin studies consistently estimate that the influence of ‘shared environment’ (the home) is small or near-zero for many traits compared to genetics and randomness/nonshared-environment.
If you want to predict whether someone will be a smoker or smart, it doesn’t matter whether they’re raised by smokers or not (to borrow an example from The Nurture Assumption*); it just matters whether their biological parents were smokers and whether they get unlucky.
This is so deeply counterintuitive and unexpected that even people who are generally familiar with the relevant topics like IQ or twin studies typically don’t know about this or disbelieve it.
(Another example is probably folk physics: Newtonian motion is true, experimentally confirmed, mathematically logical, and completely unintuitive and took millennia to be developed after the start of mechanics.)
* Rich’s citation is to Rowe 1994, The Limits of Family Influence: Genes, Experience, and Behavior; from pg204:
This is quite possibly the most comforting scientific result ever for me as a parent, by the way.
Whereas for me, it’s horrifying, given that my ex-spouse turned out to be an astonishingly horrible person.
I seem to recall Yvain posting a link to something he referred to as the beginnings of a possible rebuttal to The Nurture Assumption; I suppose I shall have to hang my hopes on that.
It may or may not be comforting to reflect that your ex-spouse is probably less horrible than s/he seems to you. (Just on general outside-view principles; I have no knowledge of your situation or your ex.)
You feared more than you hoped, eh?
Old epi jungle saying: “the causal null is generally true.”
‘Shh, kemo sabe—you hear that?’ ‘No; the jungle is silent tonight.’ ‘Yes. The silence of the p-values. A wild publication bias stalks us. We must be cautious’.
What is IRC?
Get off my lawn
http://en.wikipedia.org/wiki/Internet_relay_chat
So… does it mean that it’s completely irrelevant who adopted Harry Potter, because the results would be the same anyway?
Or is the correct model something like: abuse can change things to worse, but any non-abusive parenting simply means the child will grow up determined by their genes? That is, we have a biologically set “destiny”, and all the environment can do is either help us reach this destiny or somehow cripple us halfway (by abuse, by lack of nutrition, etc.).
For a home environment within the normal range for a population, the home environment will matter little, in a predictable sense, on many traits, compared to the genetic legacy and random events/choices/biological events/accidents/etc. There are some traits it will matter a lot on, and in a causal sense, the home environment may determine various important outcomes, but not in a way that is predictable or easily measured. The other category of ‘nonshared environment’ is often bigger than the genetic legacy, so speaking of a biologically set destiny is misleading: biologically influenced would be a better phrase.
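For reference, the usual back-of-the-envelope version of how twin studies split these components is Falconer’s approximation, where r_MZ and r_DZ are the identical- and fraternal-twin correlations for a trait:

```latex
h^2 = 2\,(r_{MZ} - r_{DZ}),
\qquad
c^2 = 2\,r_{DZ} - r_{MZ},
\qquad
e^2 = 1 - r_{MZ}.
```

Here h^2 is heritability, c^2 the shared (home) environment, and e^2 nonshared environment plus measurement error; “shared environment is near-zero” is the observation that r_MZ is often about twice r_DZ.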
Has this been demonstrated for home environments in the developing world or sub-middle class home environments in the developed world? My prior understanding was that it had not been.
There are serious restriction of range problems with the literature. I believe that there is one small French adoption study with unrestricted range which produced 1 sigma IQ difference between the bottom and top buckets (deciles?) of adopting families.
I wonder if this is what Shalizi alludes to when he says that IQ is closer to that of the adoptive parents than that of the biological parents.
Christiane Capron & Michel Duyme (1989), “Assessment of effects of socio-economic status on IQ in a full cross-fostering study”, Nature, 340, 552-554
Christiane Capron & Michel Duyme (1996), “Effect of Socioeconomic Status of Biological and Adoptive Parents on WISC-R Subtest Scores of their French Adopted Children”, Intelligence, 22, 259-275
(Both references describe the same study.) Capron & Duyme found 38 French children placed for adoption before age 2, 20 of them to parents with very high socioeconomic status (operationalized as having 14-23 years of education and working a profession) and 18 to parents with very low socioeconomic status (unskilled & semi-skilled labourers or farmers, with 5-8 years of education). When the kids took the WISC-R IQ test, those adopted into the high-SES families had a mean IQ of 111.6, while those in the low-SES families had a mean IQ of 100.0, for a difference of 0.77 sigma.
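The sigma figure is just the IQ gap over the test’s standard deviation, assuming the conventional SD of 15 points:

```latex
\frac{111.6 - 100.0}{15} = \frac{11.6}{15} \approx 0.77\,\sigma.
```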
Thanks!
In the context of IQ I’ve seen it claimed that normal variation in parenting doesn’t do much, but extreme abuse can still have a substantial effect. So parenting quality would only make a difference at the tails of the parenting quality distribution, but there it would make quite a difference.
In “No Two Alike” Harris argues that the biggest non-shared environment personality determinant is peer group. So Harry Potter style “Lock him up in a closet with no friends” would actually have a huge effect.
And it should be noted that parents do have control over peer group: where to live, public school vs. private school vs. homeschooling, getting children to join things, etc. So parenting still matters even if it’s all down to genetics and non-shared environment.
Also, has anyone investigated whether the proper response to publicized social-science answers/theories/whatever you want to call them is to assume they’re true or just wait for them to be rejected? That is: how many publicized social-science answers [the same question could be asked for diet-advice answers conflicting with pre-nutrition-studies received wisdom, etc.] were later rejected? It could well be that the right thing to do in general is stick with common sense...
Exactly! If you have something to protect as a parent, then after hearing “parents are unimportant, the important stuff is some non-genetic X” the obvious reaction is: “Okay, so how can I influence X?” (Instead of saying: “Okay, then it’s not my fault, whatever.”)
For example, if I want my children to be non-smokers, and I learn that whether I am smoking or not has much smaller impact than whether my children’s friends are smoking… the obvious next question is: What can I do to increase the probability that my children’s friends will be non-smokers? There are many indirect methods like choosing the place to live, choosing the school, choosing free-time activities, etc. I would just like to have more data on what smoking correlates with; where should I send my children and where should I prevent them from going, so that even if they “naturally” pick their peer group in that place, they will more likely pick non-smokers. (Replace non-smoking with whatever is your parenting goal.)
Shortly, when I read “parenting” in a study, I mentally translate it as: “what an average, non-strategic parent does”. That’s not the same as: “what a parent could do”.
Fictional evidence, etc. Also, HPMOR has confounders, like a differing mechanism for Horcruxes.
As Protagoras points out here there are systematic problems with twin studies.
There are problems, but I don’t think they are large, I think they are brought up mostly for ideological reasons (Shalizi is not an unbiased source and has a very big axe to grind), and a lot of the problems also cut the other way. For example, measurement error can reduce estimates of heritability a great deal, as we see in twin studies which correct for it and as predicted get higher heritability estimates, like “Not by Twins Alone: Using the Extended Family Design to Investigate Genetic Influence on Political Beliefs”, Hatemi et al 2010 (this study, incidentally, also addresses the claim that twins may have special environments compared to their non-twin siblings and that will bias results, which has been claimed by people who dislike twin studies; there’s no a priori reason to think this, and Hatemi finds no evidence for it, yet they had claimed it).
Do you mean more by this than that he has very strong opinions on this topic? I would guess you do—that you mean there’s something pushing him towards the opinions he has, that isn’t the way it is because those opinions are right. But what?
Shalizi is somewhere around Marxism in politics. This makes his writings on intelligence very frustrating, but on the other hand, it also means he can write very interesting things on economics at times—his essay on Red Plenty is the most interesting thing I’ve ever seen on economics & computational complexity. Horses for courses.
Shalizi states at least part of his position as follows:
and on the same page says these things:
and
I have to say that none of this sounds very Marxist to me. Shalizi apparently finds revolutions dishonourable; the most notable attempts at (nominally) Marxist states, the USSR and the PRC, he criticizes in very strong terms; he wants most prices to be set by markets (at least this is how I interpret what he says on that page and others it links to).
Oh, here’s another bit of evidence:
followed in the next paragraph by
which seems to me to imply, in particular, that Shalizi doesn’t consider himself “a Marxist, even a revisionist one”.
He’s certainly a leftist, certainly considers himself a socialist, but he seems quite some way from Marxism. (And further still from, e.g., any position taken by the USSR or the PRC.)
How about this?
Not that I think pigeon-holing him is very useful for determining his views on economics or politics, let alone IQ.
Suggests that Marxism is an idea Shalizi is “receptive to” but not (at least to me) that he’s actually a Marxist as such.
Does having political views that approximate Marxism imply irrationally-derived views on intelligence? I don’t see why it should, but this may simply be a matter of ignorance or oversight on my part.
I am not an expert on Marx but would be unsurprised to hear that he made a bunch of claims that are ill-supported by evidence and have strong implications about intelligence—say, that The Proletariat is in no way inferior in capabilities, even statistically, to The Bourgeoisie. But to me “somewhere around Marxism in politics” doesn’t mean any kind of commitment to believing everything Marx wrote. It isn’t obvious to me why someone couldn’t hold pretty much any halfway-reasonable opinions about intelligence, while still thinking that it is morally preferable for workers to own the businesses they work for and the equipment they use, that we would collectively be better off with much much more redistribution of wealth than we currently have (or even with the outright abolition of individual property), etc.
In another comment I’ve given my reasons for doubting that Shalizi is even “somewhere around Marxism in politics”. But even if I’m wrong about that, I’m not aware of prior commitments he has that would make him unable to think rationally about intelligence.
Of course it needn’t be a matter of prior commitments as such. It could, e.g., be that he is immersed in generally-very-leftist thought (this being either a cause or a consequence of his own leftishness), and that since for whatever reason there’s substantial correlation between being a leftist and having one set of views about intelligence rather than another, Shalizi has just absorbed a typically-leftist position on intelligence by osmosis. But, again, the fact that he could have doesn’t mean he actually has.
I think the guts of what you’re claiming is: Shalizi’s views on intelligence are a consequence of his political views; either his political views are not arrived at rationally, or the way his political views have given rise to his views on intelligence are not rational, or both. -- That could well be true, but so far what you’ve given evidence for is simply that he holds one particular set of political views. How do you get from there to the stronger claim about the relationship between his views on the two topics?
At least part of it was reading his ‘Statistical Myth’ essay, being skeptical of the apparent argument for some of the reasons Dalliard would lay out at length years later, reading all the positive discussions of it by people I was unsure understood either psychometrics or Shalizi’s essay (which he helpfully links), and then reading a followup dialogue http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/495.html where—at least, this is how it reads to me—he carefully covers his ass, walks back his claims, and quietly concedes a lot of key points.
At that point, I started to seriously wonder if Shalizi could be trusted on this topic; his constant invocation of Stephen Jay Gould (who should be infamous by this point) and his gullible swallowing of ‘deliberate practice’ as more important than any other factor, which has since been pretty convincingly debunked (both on display in the dialogue), merely reinforce my impression. And the link to Gould (Shalizi’s chief comment on Gould’s Mismeasure of Man is apparently solely “I do not recommend this for the simple reason that I read it in 1988, when I was fourteen. I remember it as a very good book, for whatever that’s worth.”; no word on whether he is bothered by Gould’s fraud) suggests it’s partially ideological.
Another revealing page: http://vserver1.cscs.lsa.umich.edu/~crshalizi/notebooks/iq.html I can understand disrecommending Rushton, but disrecommending Jensen, who invented a lot of the field and whose foes even admire him? Recommending a journalist from 1922? Recommending some priming bullshit? (Where’s the fierce methodologist statistician when you need him...?) There’s one consistent criterion he applies: if it’s against IQ and anything to do with it, he recommends it, and if it’s for it, he disrecommends it. Apparently only foes of it ever have any of the truth.
Informative. Thanks! Though I must admit that my reaction to the pages of Shalizi that you cite isn’t the same as yours.
I believe his political views are somewhere between way to the left of the Democratic Party and socialism. He dislikes the entire field of intelligence research in psychology because it’s ideologically inconvenient. He criticises anything that he can find to criticise about it. Think of him as Stephen Jay Gould, but much smarter and more honest.
See, this is a place where the US is different from Europe. Because over here (at least in the bit of Europe I’m in), being “somewhere to the right of socialism” isn’t thought of as the kind of crazy extremism that ipso facto makes someone dangerously biased and axe-grindy.
Now, of course politics is what it is, and affiliation with even the most moderate and reasonable political position can make otherwise sensible people completely blind to what’s obvious to others. So the fact that being almost (but not quite) a socialist looks to me like a perfectly normal and sensible position is perfectly compatible with Shalizi being made nuts by it. But to me “he’s somewhere to the left of Barack Obama” doesn’t look on its own like something that makes someone a biased source and explains what their problem is.
Being an extremist by local standards may be more relevant than actual beliefs.
Yup, that’s a good point. (Though it depends on what “local” means. I have the impression that academics in the US tend to be leftier than the population at large.)
Academia in the US is much leftier than the population at large. I believe it was Jonathan Haidt who went looking for examples of social conservatives in his field, and people kept nominating Philip Tetlock, who would not describe himself thus. At a conference Dr. Haidt was looking for a show of hands for various political positions; Republicans were substantially less popular than Communists. Psychology is about as left-wing as sociology, and while disciplines vary, academia is a great deal to the left of the US general population.
I’d generalize that to something like
collecting published results in medicine, psychology, epidemiology & economics journals gives an unbiased idea of the sizes of the effects they report
which is wrong at least twice over (publication bias and correlation-causation confusion) but is, I suspect, an implicit assumption made by lots of people who only made it to the first stage of traditional rationality (and reason along the lines of “normal people are full of crap, scientists are smarter and do SCIENCE!, so all I need to do to be correct is regurgitate what I find in scientific journals”).
Then don’t.
My point is more that if you have only theory and no empirical evidence, then it’s likely that you are wrong. That doesn’t mean that having a bit of empirical evidence automatically means that you are right.
I also would put more emphasis on having empirical feedback loops than on scientific publications. Publications are just one form of feedback. There’s a lot to be learned about psychology by really paying attention to the other people with whom you interact.
If I interact with a person who has a phobia of spiders and solve the issue, and afterwards put a spider on their arm and the person doesn’t freak out, I have my empirical feedback. I don’t need a paper to tell me that the person doesn’t have a phobia anymore.
Yes, I agree. To clarify, I was neither condoning the belief in my bullet point, nor accusing you of believing it. I just wanted to tip my hat to you for inspiring my example with yours.
Ah, okay.
If a plane is on a conveyor belt going at the same speed in the opposite direction, will it take off?
I remember reading this in other places (which I don’t remember), and it seems to inspire furious arguments despite being non-political and not very controversial.
That reminds me of the question of whether hot water freezes faster than cold water.
That’s a great example. If I recall, people who get worked up about it generally feel that the answer is obvious and the other side is stupid for not understanding the argument.
Same speed with respect to what? This sounds kind of like the tree-in-a-forest one.
As I remember the problem, the plane’s wheels are supposed to be frictionless so that their rotation is uncoupled from the rest of the plane’s motion. Hence the speed of the conveyor belt is irrelevant and the plane always takes off. Now, if you had a helicopter on a turntable...
What I mean is, on hearing that I thought of a conveyor belt whose top surface was moving at a speed -x with respect to the air, and a plane on top of it moving at a speed x with respect to the top of the conveyor belt, i.e. the plane was stationary with respect to the air. But on reading the Snopes link what was actually meant was that the conveyor belt was moving at speed -x and the plane’s engines were working as hard as needed to move at speed x on stationary ground with no wind.
While at the same time the rolling speed of the plane, which is the sum of its forward movement and the speed of the treadmill, is supposed to be equal to the speed of the treadmill. Which is impossible if the plane moves forward.
I’m not sure what you mean by “rolling speed of the plane”, “it’s forward movement”, and “speed of the treadmill”. The phrase “rolling speed” sounds like it refers to the component of the plane’s forward motion due to the turning of its wheels, but that’s not a coherent thing to talk about if one accepts my assumption that the wheels are uncoupled from the plane.
Rolling speed = how fast the wheels turn, described in terms of forward speed. So it’s the circumference of the wheels multiplied by their angular speed. And the wheels are not uncoupled from the plane; they are driven by the plane. It was only assumed that the friction in the wheel bearings is irrelevant.
Forward movement of the plane = speed of the plane relative to something not on the treadmill. I guess I should have called it airspeed, which it would be if there is no wind.
Speed of the treadmill = how fast the surface of the treadmill moves.
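Putting those three definitions together makes the alleged contradiction explicit (a sketch, with signs chosen so that a backward-moving belt adds to the wheels’ rolling speed):

```latex
v_{\text{roll}} = v_{\text{plane}} + v_{\text{belt}},
\qquad
v_{\text{roll}} = v_{\text{belt}}
\;\Longrightarrow\;
v_{\text{plane}} = 0.
```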
And that is more time than I wanted to spend rehashing this old nonsense. The grandparent was only meant to explain why the great grandparent would not have settled the issue, not to settle it on its own. The only further comment I have is the whole thing is based on an unrealistic setup, which becomes incoherent if you assume that it is about real planes and real treadmills.
Fair enough. I have to chip in with one last comment, but you’ll be happy to hear it’s a self-correction! My comments don’t account for potential translational motion of the wheels, and they should’ve done. (The translational motion could matter if one assumes the wheels experience friction with the belt, even if there’s no internal wheel bearing friction.)
That’s different though. The Plane on a Treadmill started with somebody specifying some physically impossible conditions, and then the furious arguments were between people stating the implication of the stated conditions on one side and people talking about the real world on the other hand.
If your twin’s going away for 20 years to fly around space at close to the speed of light, they’ll be 20 years older when they come back.
A spinning gyroscope, when pushed, will react in a way that makes sense.
If another nation can’t do anything as well as your nation, there is no self-serving reason to trade with them.
You shouldn’t bother switching in the Monty Hall problem.
The sun moves across the sky because it’s moving.
EDIT: Corrected all statements to be false.
I think you may have expressed this one the wrong way around; the way you’ve phrased it (“can make you better off”) is the surprising truth, not the surprising untruth.
They will. I think you mean: If your twin flies through space at close to the speed of light and arrives back 20 years later, they’ll be 20 years older when they come back. That one’s false.
Reversed polarity on a few statements. Thanks.
Your first statement is still correct.
To be more explicit: What is needed to make the statement interestingly wrong is for the two 20-year figures to be in different reference frames. If your twin does something for 20 years, then they will be 20 years older; but if they do something for what you experience as 20 years they may not be.
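A worked instance of the distinction (a sketch, with v = 0.99c chosen arbitrarily): if the twin travels for t = 20 years as measured on Earth, their elapsed proper time is

```latex
\tau = t\sqrt{1 - v^2/c^2}
     = 20\,\text{yr} \times \sqrt{1 - 0.99^2}
     \approx 2.8\,\text{yr},
```

so they come back far less than 20 years older; only if the 20 years is measured in their own frame do they age the full 20.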
Edited to more firmly attach “for 20 years” to the earth.
Rephrased to more explicitly place “for 20 years” in the earth’s reference frame.
Would wrong scientific theories qualify? E.g. phlogiston or aether.
Downwind faster than the wind. See seven pages of posts here for examples of people getting it wrong.
Kant was famously wrong when he claimed that space had to be flat.
As discussed previously, this exact claim seems suspiciously absent from the first Critique.
I agree that Kant doesn’t seem to have ever considered non-Euclidean geometry, and thus can’t really be said to be making an argument that space is flat. If we could drop an explanation of general relativity on him, he’d probably come to terms with it. On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space, which seems pretty much what he was accused of.
I don’t see this in the quoted passage. He’s trying to illustrate the nature of propositions in geometry, and doesn’t appear to be arguing that the parallel postulate is universally true. “Take, for example,” is not exactly assertive.
Also, have a care: those two paragraphs are not consecutive in the Critique.
This isn’t very interesting, but I used to believe that the rules about checkmate didn’t really change the nature of chess. Some of the forbidden moves—moving into check, or failing to move out if possible—are always a mistake, so if you just played until someone captured the king, the game would only be different in cases where someone made an obvious mistake.
But if you can’t move, the game ends in stalemate. So forbidding you to move into check means that some games end in draws, where capture-the-king would have a victor.
(This is still armchair theorising on my part.)
Does it have to be something from the modern day? Because there are tons of historical examples.
There are many beliefs that people will arrive at through armchair theorizing, but only until they are corrected. If you came up with the idea that the Earth was flat a long time ago, nobody would correct you. If you did that today, someone would correct you; indeed, society is so full of round-Earth information that it’s hard for anyone to not have heard of the refutation before coming up with the idea, unless they’re a young child.
Does that count as something arrived at through armchair theorizing? People would, after all, come up with it by armchair theorizing if they lived in a vacuum. They did come up with it through armchair theorizing back when they did live in a vacuum.
That’s why there are tons of historical examples and not so many modern examples. A modern example has to be something where the refutation is well known by experts, but the refutation hasn’t made it down to the common person, because if the refutation did make it down to the common person that would inhibit them from coming up with the armchair theory in the first place.
(For historical examples, it’s possible that the refutation is known by our experts but not by contemporary experts, or that, because of the bad state of mass communication in ancient times, the refutation simply hadn’t spread enough to reach most armchair theorists.)
Something from the modern day, yes. The people arriving at the naive belief, and the people with the ability to demonstrate its incorrect status, should coexist.
Sorry to keep going on this, but would a historical example of a group of intelligent people arriving at a naive belief, even though there was plenty of evidence available at the time that the belief was naive, work?
Possibly, yes. I’d love to hear whatever you’ve got in mind.
The Conservative obsession with a non-existent link between abortion and breast cancer.
That hardly satisfies any of the desiderata! It’s political, controversial, and it’s hard to see how armchair reasoning would lead you to believe it.
Bell’s spaceship paradox.
According to Bell, he surveyed his colleagues at CERN (clearly a group of intelligent, qualified people) about this question, and most of them got it wrong. Although, to be fair, the conflict here is not between expert reasoning and domain knowledge, since the physicists at CERN presumably possessed all the knowledge you need (basic special relativity, really) to get the right answer.
When I was ~16, I came up with group selection to explain traits like altruism.
Generalising from ‘plane on a treadmill’: a lot of incorrect answers to physics problems, and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like ‘because of Newton’s Nth law’, ‘because of drag’, ‘because of air resistance’, ‘but this is unphysical so it must be false’, etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physical experiments, since most people don’t have a good enough physical intuition to know in advance which types of physical arguments go through; they should therefore be in a state of epistemic learned helplessness with respect to physics.
I have a strange request. Without consulting some external source, can you please briefly define “learned helplessness” as you’ve used it in this context, and (privately, if you like) share it with me? I promise I’ll explain at some later date.
There will probably be holes, and it may not quite capture exactly what I mean, but I’ll take a shot. Let me know if this is not rigorous or detailed enough and I’ll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tab, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.
Helplessness about topic X—One is not able to attain a knowably stable and confident opinion about X given the amount of effort one is prepared to put in or the limits of one’s knowledge or expertise etc. One’s lack of knowledge of X includes lack of knowledge about the kinds of arguments or methods that tend to work in X, lack of experience spotting crackpot or amateur claims about X, and lack of general knowledge of X that would allow one to notice one’s confusion at false basic claims and reject them. One is unable to distinguish between ballsy amateurs and experts.
Learned helplessness about X—The helplessness is learned from experience of X; much like the sheep in Animal Farm, one gets opinion whiplash on some matter of X that makes one realise that one knows so little about X that one can be argued into any opinion about it.
(This has ended up more like a bunch of arbitrary properties pointing to the sense of learned helplessness rather than a slick definition. Is it suitable for your purposes, or should I try harder to cut to the essence?)
Rant about learned helplessness in physics: Puzzles in physics, or challenges to predict the outcome of a situation or experiment, often seem like they have many different possible explanations leading to a variety of very different answers, with the merit of these explanations not being distinguishable except to those who have done lots of physics and seen lots of tricks, and maybe even then maybe you just need to already know the answer before you can pick the correct answer.
Moreover, one eventually learns that the explanations at a given level of physics instruction are probably technically wrong in that they are simplified (though I guess less so as one progresses).
Moreover moreover, one eventually becomes smart enough to see that the instructors do not actually even spot their leaps in logic. (For example, it never seemed to occur to any of my instructors that there’s no reason you can’t have negative wavenumbers when looking at wavefunctions in basic quantum. It turns out that when I run the numbers, everything rescales since the wavefunction bijects between -n and n and one normalizes the wavefunction anyway, so that it doesn’t matter, but one could only know this for sure after reasoning it out and justifying discarding the negative wavenumbers. It basically seemed like the instructors saw an ‘n’ in sin(n*pi/L) or whatever and their brain took it as a natural number without any cognitive reflection that the letter could have just as easily been a k or z or something, and to check that the notation was justified by the referent having to be a natural.)
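The check itself is short; a sketch for the standard infinite square well, which appears to be the case described:

```latex
\psi_n(x) \propto \sin\!\left(\frac{n\pi x}{L}\right),
\qquad
\psi_{-n}(x) = \sin\!\left(\frac{-n\pi x}{L}\right) = -\psi_n(x),
```

so a negative wavenumber gives the same state up to an overall sign, which normalization absorbs (and n = 0 gives the zero function), justifying the restriction to positive integers, but only after actually checking.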
Moreover, it takes a high level of philosophical ability to reason about physics thought experiments and their standards of proof. Take the ‘directly downwind faster than the wind’ problem. The argument goes back and forth, and, like the sheep, at every point the side that’s speaking seems to be winning. Terry Tao comes along and says it’s possible, and people link to videos of carts with propellers apparently going downwind faster than the wind and wheels with rubber bands attached allegedly proving it. But beyond deferring to his general hard sciences problem-solving ability, one has no inside view way to verify Tao’s solution; what are the standards of proof for a thought experiment? After all, maybe the contraptions in the video only work (assuming they do work as claimed, which isn’t assured) because of slight side-to-side effects rather than directly down wind or some other property of the test conditions implicitly forbidden by the thought experiment.
Since any physical experiment for a physics thought experiment will have additional variables, one needs some way to distinguish relevant and irrelevant variables. Is the thought experiment the limit as extraneous variables become negligible, or is there a discontinuity? What if different sets of variables give rise to different limits? How does anyone ever know what the ‘correct’ answer is to an idealised physics thought experiment of a situation that never actually arises? Etc.
Thanks for that. The whole response is interesting.
I ask because up until quite recently I was labouring under a wonky definition of “learned helplessness” that revolved around strategic self-handicapping.
An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they’re reinforced in this behaviour by someone taking the “hard” task away from them. Hence their “helplessness” is “learned”.
It wasn’t until recently that I came across an accurate definition in a book on reinforcement training. I’m pretty sure I’ve had “learned helplessness” in my lexicon for over a decade, and I’ve never seen it used in a context that challenged my definition, or used it in a way that aroused suspicion. It’s worth noting that I probably picked up my definition through observing feminist discussions. Trying a mental find-and-replace on ten years’ conversations is kind of weird.
I am also now bereft of a term for what I thought “learned helplessness” was. Analogous ideas come up in game theory, but there’s no snappy self-contained way available to me for expressing it.
Good chance you’ve seen both of these before, but:
http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html
Damn, if only someone had created a thread for that, ho ho ho
Strategic incompetence?
I’m not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?
Schelling does talk about strategic self-sabotage, but it captures a lot of deliberated behaviour that isn’t implied in my fake definition.
Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn’t stand out as obviously incorrect.
Now picturing a Venn diagram with three overlapping circles labelled “epistemic learned helplessness”, “what psychologists call ‘learned helplessness’”, and “what sixes_and_sevens calls ‘learned helplessness’”!
Making up a term for this...”reinforced helplessness”? (I dunno whether it’d generalize to cover the rest of what you formerly meant by “learned helplessness”.)
The sun revolves around the earth.
The earth revolving around the sun was also armchair reasoning, and refuted by empirical data like the lack of observable parallax of stars. Geocentrism is a pretty interesting historical example because of this: the Greeks reached the wrong conclusion with right arguments. Another example in the opposite direction: the Atomists were right about matter basically being divided up into very tiny discrete units moving in a void, but could you really say any of their armchair arguments about that were right?
It is not clear that the Greeks rejected heliocentrism at all, let alone that they rejected it for any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.
The atomists got the atomic theory from the Brownian motion of dust in a beam of light, the same way that Einstein convinced the final holdouts thousands of years later.
Eh? I was under the impression that most of the Greeks accepted geocentrism, eg Aristotle. Double-checking https://en.wikipedia.org/wiki/Heliocentrism#Greek_and_Hellenistic_world and https://en.wikipedia.org/wiki/Ancient_Greek_astronomy I don’t see any support for your claim that heliocentrism was a respectable position and geocentrism wasn’t overwhelmingly dominant.
Cite? I don’t recall anything like that in the fragments of the Pre-socratics, whereas Eleatic arguments about Being are prominent.
Lucretius talks about the motion of dust in light, but he doesn’t claim that it is the origin of the theory. When I google “Leucippus dust light” I get lots of people making my claim and more respectable sources making weaker claims, like “According to traditional accounts the philosophical idea of simulacra is linked to Leucippus’ contemplation of a ray of light that made visible airborne dust,” but I don’t see any citations to where this tradition is recorded.
The Greeks cover hundreds of years. They made progress! You linked to a post about the supposed rejection of Aristarchus’s heliocentric theory. It’s true that no one before Aristarchus was heliocentric. That includes Aristotle, who died when Aristarchus was 12. Everyone agrees that the Hellenistic Greeks who followed Aristotle were much better at astronomy than the Classical Greeks. The question is whether the Hellenistic Greeks accepted Aristarchus’s theory, particularly Archimedes, Apollonius, and Hipparchus. But while lots of Aristotle’s writings remain, practically nothing of the later astronomers remains.
It’s true that secondary sources agree that Archimedes, Apollonius, and Hipparchus were geocentric. However, they give no evidence for this. Try the scholarly article cited in the post you linked. It’s called “The Greek Heliocentric Theory and Its Abandonment” but it didn’t convince me that there was an abandonment. That’s where I got the claim about Hipparchus refusing to choose.
I didn’t claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes’s Sandreckoner, which uses Aristarchus’s model. I don’t claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.
Maybe heliocentrism survived a century and was finally rejected by Hipparchus. That’s a world of difference from saying that Seleucus was his only follower. Or maybe it was just the two of them, but we live in a state of profound ignorance.
As for the ultimate trajectory of Greek science, that is a difficult problem. Lucio Russo suggests that Roman science is all mangled Greek science and proposes to extract the original. For example, Seneca claims that the retrograde motion of the planets is an illusion, which sounds like he’s quoting someone who thinks the Earth moves, even if he doesn’t. More colorful are Pliny and Vitruvius, who claim that the retrograde motion of the planets is due to the sun shooting triangles at them. This is clearly a heliocausal theory, even if the authors claim to be geocentric. Less clear is Russo’s interpretation, that this is a description of a textbook diagram that they don’t understand.
So, you just have an argument from silence that heliocentrism was not clearly rejected?
I just read through the bits of Sand Reckoner referring to Aristarchus (Mendell’s translation), and throughout Archimedes seems to be at pains to distance himself from Aristarchus’s model, treating it as a minority view (emphasis added):
Not language which suggests he takes it particularly seriously, much less endorses it.
In fact, it seems that the only reason Archimedes brings up Aristarchus at all is as a form of ‘worst-case analysis’: some fools doubt the power of mathematics and numbers, but Archimedes will show that even under the most ludicrously inflated estimate of the size of the universe (one implied by Aristarchus’s heliocentric model), he can still calculate & count the number of grains of sands it would take to fill it up; hence, he can certainly calculate & count the number for something smaller like the Earth. From the same chapter:
And he triumphantly concludes in ch4:
All I have ever said is that you should stop telling fairy tales about why the Greeks rejected heliocentrism. If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn’t talk about parallax.
I listed several pieces of positive evidence, but I’m not interested in the argument.
The Sand Reckoner implies the parallax objection when it uses an extremely large heliocentric universe! Lack of parallax is the only reason for such extravagance. Or was there some other reason Aristarchus’s model had to imply a universe lightyears in extent...?
Aristarchus using a large universe is evidence that he thought about parallax. It is not evidence that his opponents thought about parallax.
You are making a circular argument: you say that the Greeks rejected heliocentrism for a good reason because they invoked parallax, but you say that they invoked parallax because you assume that they had a good reason.
There is a contemporary recorded reason for rejecting Aristarchus: heresy. There is also a (good) reason recorded by Ptolemy 400 years later, namely wind speed.
Uh… why would the creator of the system consider parallax an issue, and the critics not consider parallax an issue?
And you still haven’t addressed my quotes from The Sand Reckoner indicating Archimedes considered heliocentrism dubious and a minority view, which should override your arguments from silence.
No. I said parallax is why they rejected it, in part because to save the model one has to make the universe large. Then you said ‘look! Archimedes uses a large universe!’, and I pointed out this is 100% predicted by the parallax-rejection theory. So what? Where is your alternate explanation of the large universe? Did Archimedes just make shit up?
And how do these lead to a large universe...?
The very question is whether the critics made good arguments. You are assuming the conclusion.
People make stupid arguments all the time. Anaxagoras was prosecuted for heresy and Aristarchus may have been. How many critics of Copernicus knew that he was talking about what happens over the course of a year, not what happens over the course of a day?
Yes, Archimedes says that Aristarchus’s position is a minority. Not dubious. I do not see that in the quotes at all. Yes, Archimedes probably uses Aristarchus’s position for the purposes of worst-case analysis to get numbers as large as possible; indeed, they are larger than the numbers Ptolemy attributes to Aristarchus. As I said at the beginning, I do not claim that he endorsed heliocentrism, only that he considered it a live hypothesis.
One mystery is the purpose of the Sandreckoner. Is it just about large numbers? Or is it also about astronomy? Is Archimedes using exotic astronomy to justify his interest in exotic mathematics? Or is he using his public venue to promote diversity in astronomy?
It’s assuming the conclusion to think critics agreed with Aristarchus’s criticism of a naive heliocentric theory?
I disagree strongly. I don’t see how you could possibly read the parts I quoted, and italicized, and conclude otherwise. Like, how do you do that? How do you read those bits and read it as anything else? What exactly is going through your head when you read those bits from Sand Reckoner, how do you parse it?
Gee, if only I had quoted the opening and ending bits of the Sand Reckoner where Archimedes explained his goal...
Many people object to Copernicus on the grounds that Joshua made the Sun stand still, or on grounds of wind, without seeming to realize that they object to the daily rotation of the Earth, not to his special suggestion of the yearly revolution of the Earth about the Sun.
If Copernicus had such lousy critics, why assume Aristarchus had good critics who were aware of his arguments? Maybe they objected to heresy, like (maybe) Cleanthes.
Archimedes was a smart guy who understood what Aristarchus was saying. He seems to accept Aristarchus’s argument that heliocentrism implies a large universe. If (if!) he rejects the premise, that does not tell us why. Maybe because he rejects the conclusion. Or maybe he rejects the premise for completely different consequences, like wind. Or maybe he is not convinced by Aristarchus’s main argument (whatever that was) and doesn’t even bother to move on to the consequences.
Ptolemy does give a reason: he says wind. He has the drawback of being hundreds of years late, so maybe he is not representative, but at least he gives a reason. If you extract any reason, that is the one to pick.
The principal purpose of the Sandreckoner is to investigate infinity, to eliminate the realm of un-nameable numbers, thus to eliminate the confusion between un-nameably large and infinite. But there are many other choices that go into the contents, and they may be motivated by secondary purposes. Physical examples are good. Probably sand is a cliche. But why talk about astronomy at all? Why not stop at all the sand in the world? Or fill the sphere of the sun with sand, stopping at Aristarchus’s non-controversial calculation of that distance? Such choices are rarely explained. I offered two possibilities and the text does not distinguish them.
You have not explained why Aristarchus would make his universe so large if the criticisms were as bogus as some of Copernicus’s critics. Shits and giggles?
If he rejects heliocentrism, as he clearly does, it does not matter for your original argument why exactly.
You still have not addressed the quotes from Sand Reckoner I gave which clearly show Archimedes rejects heliocentrism and describes it as a minority rejected position and he only draws on Aristarchus as a worst-case a fortiori argument. Far from being a weak argument from silence (weak because while we lack a lot of material, I don’t think we lack so much material that they could have seriously maintained heliocentrism without us knowing; absence of evidence is evidence of absence), your chosen Sand Reckoner example shows the opposite.
If this is the best you can do, I see no reason to revise the usual historical scenario that heliocentrism was rejected because any version consistent with observations had absurd consequences.
Aristarchus made the universe big because he himself thought about parallax. Maybe some critic first made this objection to him, but such details are lost to time, and uninteresting compared to the question of the response to the complete theory.
As to the rest, I abandon all hope of convincing you.
I ask only that any third parties read the whole exchange and not trust Gwern’s account of my claims.
Atoms can actually be divided into parts, so it’s not clear that the atomists were right. If you told some atomist about quantum states, I doubt they would find that to be a valid example of what they mean by “atom”.
The atomists were more right than the alternatives: the world is not made of continuously divisible bone substances, which are bone no matter how finely you divide them, nor is it continuous mixtures of fire or water or apeiron.
You could say the same of Dalton.
How about “human beings only use 10% of their brains”? Not political, not flamebait, but possibly also “a lot of people say it and sounds plausible” rather than armchair theorizing. “Everyone should drink eight glasses of water a day” is probably in the same category.
I looked through Wikipedia’s list of common misconceptions for anything that might arise independently in lots of people through reasonable reflection, rather than just “facts” that sneak into the public consciousness, but none of them really qualify.
Of course, false “facts” can also easily sneak into less trafficked Wikipedia pages, such as its list of common misconceptions.
Perhaps “The person who came out of the teleporter isn’t me, because he’s not made of the same atoms”?
Why not also spend an equal amount of time searching for examples that prove the opposite of the point you’re trying to make? Or are you speaking to an audience that doesn’t agree this is possible in principle?
Edit: Might Newtonian physics be an example?
A thought I’ve had floating around for a few years now.
With the Internet, it’s a lot easier to self-study than ever before. This changes the landscape. Money is much less of a limiting factor, and things like time, motivation, and availability of learning material are now more important. It occurs to me that the last is greatly language-dependent. If the only language you speak is spoken by five million other people, you might as well not have the Internet at all. But even if you speak a major language, the material you’ll be getting is greatly inferior in quantity, and probably quality, to material available to English speakers. Just checking stats for Wikipedia, the English version is many times larger than other versions and scores much better on all indices. For newer things like MOOCs and Quora, the gap is even larger, and a counterpart often doesn’t even exist (based on my experiences with Korean, my native language).
Could this spark a significant education gap between English speakers and non-speakers? Since learning through the web has only recently become competitive with traditional methods of learning, we shouldn’t expect to see the bulk of the effects for at least a decade or so.
Given that most important scientific papers are in English, there is already a gap between people who can speak English and people who can’t. I don’t think you can get a good position in a Western business these days if you can’t speak any English.
I was thinking more in terms of nations. The top few percent of any country can already speak English and have all the resources necessary for learning. The education the rest get is largely determined by the quality of their country’s educational system. MOOCs disrupt this pattern.
I personally didn’t learn my English in the formal education system of Germany but on the internet.
I think that countries like Korea, China or Japan don’t really provide students with much free time to learn English on their own or use MOOCs.
That’s interesting. Would you say that your English ability is typical of what an intelligent German speaker could attain through the Internet?
For Koreans, learning English well enough to comfortably learn in it is extremely difficult short of living in an English speaking country for multiple years at a young age. I hear that the Japanese also have this problem.
I knew that it’s easier for speakers of European languages to learn English than for speakers of East Asian languages, but your ability is way above what I thought would be feasible without spending insane amounts of time on English.
If you are typical, well, that explains why RichardKennaway, below, mentioned choosing to learn English as if it were a minor thing. You see, I have this perception of English as a “really hard thing” that takes years to get mediocre at. And I believe this is the common view among East Asians.
I recall reading a news article that claimed that the difference between the kids who play a lot of video games and spend a lot of time on the English-speaking Internet, and the kids who do not, is very obvious in the English classes of most Finnish schools these days. Basically the avid gamers get top grades without even trying much.
My personal experience was similar—I learned very little English in school that I wouldn’t already have learned from video games, books, and the English-speaking Internet before that.
That said, this doesn’t contradict the “it takes years to become good” idea—it did take us years, we just had pretty much our entire childhoods to practice.
The important category is probably speakers of Germanic languages; Italians and Russians probably don’t get as big of an advantage.
I strongly suspect that they’re still a lot better off than native speakers of (say) Mandarin or Korean or Japanese. To be more specific: I suspect German is somewhat better for this purpose than Italian, which in turn is substantially better than Russian, which in turn is substantially better than Hungarian, which in turn is substantially better than Mandarin.
English and German are both Germanic languages. They share a lot of structure and vocabulary and are written with more or less the same letters.
English and Italian are both languages with a lot of Latin in their heritage. They share some structure and a lot of vocabulary and are written with exactly the same letters.
English and Russian are both Indo-European languages with some classical heritage. They share some structure but rather little vocabulary, and their writing systems are closely related.
Hungarian is not Indo-European, but largely shares its writing system with English.
Mandarin is not Indo-European (and I think is decidedly further from Indo-European than Hungarian is). It works in a completely different way from English in many many ways, and has a radically (ha!) different writing system.
I would guess (but don’t know enough for my guess to be worth much) that the gap between Hungarian and Mandarin is substantially the largest of the ones above, and that one could find other languages that would slot into that gap while maintaining the “substantially better” progression.
Agreed.
I don’t think the writing system would account for that much of a difference, since learning the Latin Alphabet is something everybody is doing anyway, and it’s not much extra work (compared to grammar and vocabulary). I still suspect Hungarian-speakers might find English easier because of closer cultural assumptions and background.
I probably do spend insane amounts of time on the English internet. An amount of time that a Japanese student simply couldn’t, because he’s too busy keeping up with the extensive school curriculum in Japan. East Asian countries tend to spend a lot of time drilling children to perform well on standardized tests, which doesn’t leave much time for things like learning English.
Another issue is that a lot of the teaching of English in East Asia is simply highly inefficient. That will change with various internet e-learning projects.
An outlier would be Singapore, where, as Wikipedia suggests: “The English language is now the most medium form of communication among students from primary school to university.”
I’ve seen them spend a lot of time drilling for standardized English tests, but those tests miss a lot of things, and quite a few students do well on those tests but can’t have a conversation in English. Or know what “staunch”, “bristle”, and “bulwark” mean, but not “bullshit”.
And ability to learn.
The greater the gap, the greater the incentive for non-speakers to narrow the gap by becoming speakers.
Yes, but only if the gap is known to exist.
I recently learned that chocolate contains a significant amount of caffeine. 100g of chocolate contains roughly as much as a cup of black tea. As a result I updated in the direction of not eating chocolate directly before going to bed.
I don’t know whether the information is new to everyone, but it was interesting for me.
Caffeine’s a strong drug for me, except I have a huge tolerance now because I consume so much coffee. One night a few years ago, after I had quit caffeine for about a month, I was picking away at a bag of chocolate almonds while doing homework, and after a few hours I noticed that I felt pretty much euphoric. So yeah, this is good info to have if you’re trying to get off caffeine.
Besides caffeine, there’s also theobromine.
FWIW, I did some reading of studies and it seems that kinds of tea vary too much in caffeine content for classifying by preparation method to be a meaningful indication of caffeine content, and there’s some question about how l-theanine plays a role. It’s probably better to say ‘a cup of tea’.
Here is some data on tea caffeine content.
Anecdotally, I know a person who drinks a lot of “regular” black tea (Ceylon/Assam), but doesn’t drink Darjeeling tea because it gets her jittery and too-much-caffeine-shaky.
Yeah, that was one of the studies I read on the topic. (The key part is “Caffeine concentrations in white, green, and black teas ranged from 14 to 61 mg per serving (6 or 8 oz) with no observable trend in caffeine concentration due to the variety of tea.”, although they bought mostly black teas and not many white/green or any oolongs; but the other studies don’t show a clear trend either.)
Did you see any data on natural variability—that is, comparing the caffeine content in tea from two different bushes on the same plantation; from different plantations (on different soils, different altitudes, etc.)?
What makes tea white/green/oolong/black is just post-harvest thermal processing and it seems likely that the caffeine content is determined at the plant level.
Don’t think so. It’d be a good study to run, but a bit challenging: even if you buy from a specific plantation, I think they tend to blend or mix leaves from various bushes, so getting the leaves would be more of a challenge than normal.
I thought that they were also usually harvested at different times through the year?
You mean that tea intended to become, say, white is harvested at a different time than tea intended to become black? I don’t think that’s the case. As far as I know the major difference is what you harvest, but that expresses itself as the quality of the tea, not whether it is white or oolong or black. For the top teas you harvest the bud at the tip of the branch and one or two immature leaves next to it (which often look silverish because of fine hairs on these leaves); such teas are known as “tippy”. Cheaper teas harvest full-grown leaves. There might well be a difference in caffeine content between the two, but it’s not a green/black difference, it’s a good-tea-vs-lousy-tea difference.
Darjeeling is unusual in that it has two specific harvesting seasons (called “first flush” and “second flush”) but both are used to make black (well, kinda-black) tea.
White tea is harvested early and immature. Black/oolong/green is a matter of post-processing.
White tea has huge variance in caffeine across varieties. Both tails of the distribution are white.
Can you provide a link for that assertion? The post-harvesting processing of white tea is quite different from that of green, not to mention black. Also, I believe that while white tea requires top-quality leaves (the bud + 1-2 young leaves) and other teas don’t, the top quality greens, oolongs, and blacks use the same “immature” leaves as white.
The average difference between different cups of tea is probably greater than the difference between different kinds of black tea. I don’t see how using a wider category is helpful for giving people an idea about how much caffeine a bar of chocolate happens to have.
A cup of black tea is an amount that the average person wouldn’t drink right before bed. If you have a better metric for giving people a meaningful idea of the amount of caffeine in chocolate, feel free to suggest one.
And I don’t see why you should make distinctions which don’t make a difference, and engage in false precision.
And they would drink a cup of white tea, green tea, or oolong tea right before bed?
I already did: ‘a cup of tea’.
There are various kinds of herbal tea that don’t have any caffeine in them, and I do drink them before going to bed.
Yes, but people don’t usually mean herbal teas or tisanes when they say ‘tea’.
That depends very much on the people with whom you interact.
Caffeinated tea, then?
100g of pure chocolate is a lot. I normally eat 25g of 85% chocolate. That’s probably an upper bound on a typical serving, diluted by other ingredients. For people who do not otherwise consume caffeine, it’s a powerful dose, but for people who drink coffee every morning, it’s probably not much.
Added: 25g of pure chocolate has about 10mg of caffeine, about the same as 25g of liquid coffee.
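For a rough back-of-the-envelope check using the figures above (a sketch only; real caffeine content varies a lot by bean and brand):

    # Back-of-the-envelope caffeine estimate from the figures quoted above.
    # These numbers are rough; real chocolate varies widely by bean and brand.
    caffeine_per_g_pure = 10 / 25   # ~10 mg per 25 g of pure chocolate

    def caffeine_mg(serving_g, cocoa_fraction):
        """Estimate caffeine in a chocolate serving of given cocoa content."""
        return serving_g * cocoa_fraction * caffeine_per_g_pure

    print(caffeine_mg(25, 0.85))   # ~8.5 mg: a typical square of 85% dark
    print(caffeine_mg(100, 1.0))   # ~40 mg: in the range of a cup of black tea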
I’ve never tried to fnord something before, did I do it right?
Frankenstein’s monster doomsayers overwhelmed by Terminator’s Skynet become ever-more clever singularity singularity the technological singularity idea that has taken on a life of its own techno-utopians wealthy middle-aged men singularity as their best chance of immortality Singularitarians prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence a man-made god that grants transcendence doomsayers the techno-dystopians Apocalypsarians equally convinced super-intelligent AI no interest in curing cancer or old age or ending poverty malevolently or maybe just accidentally bring about the end of human civilisation Hollywood Golem Frankenstein’s monster Skynet and the Matrix fascinated by the old story man plays god and then things go horribly wrong singularity chain reaction even the smartest humans cannot possibly comprehend how it works out of control singularity technological singularity cautious and prepared optimistic obsessively worried by a hypothesised existential risk a sequence of big ifs risk while not impossible is improbable worrying unnecessarily we’re falling into a trap fallacy taking our eyes off other risks none of this has brought about the end of civilisation a huge gulf obsessing about the risk of super-intelligent AI cautious and prepared we should be worrying about present-day AI rather than future super-intelligent AI.
Artificial intelligence will not turn into a Frankenstein’s monster, Alan Winfield, Observer, Sunday 10 August 2014
Source, it’s from back in 2002
On the limits of rationality given flawed minds —
There is some fraction of the human species that suffers from florid delusions, due to schizophrenia, paraphrenia, mania, or other mental illnesses. Let’s call this fraction D. By a self-sampling assumption, any person has a D chance of being a person who is suffering from delusions. D is markedly greater than one in seven billion, since delusional disorders are reported; there is at least one living human suffering from delusions.
Given any sufficiently interesting set of priors, there are some possible beliefs that have a less than D chance of being true. For instance, Ptolemaic geocentrism seems to me to have a less than D chance of being true. So does the assertion “space aliens are intervening in my life to cause me suffering as an experiment.”
If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases “B is true, despite my previous belief that it is quite unlikely” and “I have developed a delusional disorder, despite delusional disorders being quite rare”?
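To make the competing updates concrete, here is a minimal sketch in Python with made-up numbers (only the structure of the comparison matters, not the specific values):

    # Minimal Bayes sketch for "is this evidence real, or am I deluded?"
    # All numbers are made up for illustration.
    D = 0.01          # prior probability of a relevant delusional disorder
    prior_B = 1e-6    # prior probability that belief B is true (< D)

    p_ev_given_B = 0.9          # strong evidence is likely if B is true
    p_ev_given_delusion = 0.5   # a delusion could also produce such "evidence"
    p_ev_given_neither = 1e-9   # otherwise such evidence is very unlikely

    # Unnormalized posteriors over the three hypotheses:
    post_B = prior_B * p_ev_given_B
    post_delusion = D * p_ev_given_delusion
    post_neither = (1 - D - prior_B) * p_ev_given_neither

    total = post_B + post_delusion + post_neither
    print(post_B / total)         # ~0.0002: B stays very unlikely
    print(post_delusion / total)  # ~0.9998: "I am deluded" dominates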
For you to rule out a belief (e.g. geocentrism) as totally unbelievable, not only does it have to be less likely than insanity, it has to be less likely than insanity that looks like rational evidence for geocentrism.
You can test yourself for other symptoms of delusions—and one might think “but I can be deluded about those too,” but you can think of it like requiring your insanity to be more and more specific and complicated, and therefore less likely.
The relevant number is probably not D (the fraction of people who suffer from delusions) but a smaller number D0 (the fraction of people who suffer from this particular kind of delusion). In fact, not D0 but the probably-larger-in-this-context number D1 (the fraction of people in situations like yours before this happened who suffer from the particular delusion in question).
On the other hand, something like the original D is also relevant: the fraction of people-like-you whose reasoning processes are disturbed in a way that would make you unable to evaluate the available evidence (including, e.g., your knowledge of D1) correctly.
Aside from those quibbles, some other things you can do (mostly already mentioned by others here):
Talk to other people whom you consider sane and sensible and intelligent.
Check your reasoning carefully. Pay particular attention to points about which you feel strong emotions.
Look for other signs of delusions.
Apply something resembling scientific method: look for explicitly checkable things that should be true if B and false if not-B, and check them.
Be aware that in the end one really can’t reliably distinguish delusions from not-delusions from the inside.
The simple answer is to ask someone else, or better yet a group; if D is small, then D^2 or D^4 will be infinitesimal. However, delusions are “infectious” (see Mass hysteria), so this is not really a good method unless you’re mostly isolated from the main population.
The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives’ tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives’ tale (which, being old, is probably useful as a belief, regardless of truth value).
Finally, the defeatist answer is that you can’t actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you’ll just rationalize them away regardless. If you keep notes on your beliefs, you’ll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there’s not much hope for change.
For the record, I disagree with “delusional disorders being quite rare”; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are “serious”, but I could fill a book with all of the ways people believe something that isn’t true.
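To put rough numbers on the “ask a group” suggestion above (a sketch assuming the checkers’ delusions are independent, which the mass-hysteria caveat warns is often false):

    # Probability that n independent checkers all share the same delusion.
    # Independence is the load-bearing assumption here.
    D = 0.01
    for n in (1, 2, 4):
        print(n, D ** n)   # 0.01, 0.0001, 1e-08: drops fast per checker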
Given replication rates of scientific studies a single study might not be enough. Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.
No need to get people to wash their hands before you do a business deal with them.
Enough for what? My question is whether my hair stylist saying “Shaving makes the hair grow back thicker” is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for its product, but in my line of work such studies are rare.
I find that in most cases I simply don’t have an intuition. What’s the population of India? I can’t tell you, I’d have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).
I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)
Humans are biased to overrate bad human behavior as a cause of mistakes. The sensible thing is to orient yourself by whether similar studies replicate.
Regardless, every publish-or-perish paper has an inherent bias to find spectacular results.
Let’s say wearing red every day.
Thinking that those Israeli judges don’t give people parole because they don’t have enough sugar in their blood right before mealtime. Going and giving every judge candy before every case to make it fair isn’t warranted.
That’s fixable by training Fermi estimates.
It’s a reference to the controversy about whether washing your hands primes you to be more moral. It’s an experimental social science result that failed to replicate.
If a crocodile bites off your hand, it’s generally your fault. If the hurricane hits your house and kills you, it’s your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then it doesn’t give any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I’m involved in goes wrong).
Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it’s a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it’s widely cited as a building block of someone’s theory, or if there’s a book on it.
Right, people only publish their successes. There are so many failures that it’s not worth mentioning or considering them. But they don’t need to be “spectacular”, just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in “prestigious” journals, which indeed only publish “spectacular” results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to “all information everywhere” then that bias (mostly) goes away, and the real problem is finding anything at all.
So the study there links red to aggression; I don’t want to be aggressive all the time, so why should I wear red all the time? For example, I don’t want a red car because I don’t want to get pulled over by the cops all the time. Similarly for most results; they’re very limited in scope, of the form “if X then Y” or even “X is associated with Y”. Many times, Y is irrelevant, so I don’t need to even consider X.
Sure, but if I’m involved with a case then I’ll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.
You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.
Ah, social science. I need to take more courses in statistics before I can comment… so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).
The car story appears to be a myth nowadays, but that could just be due to the increased use of radar guns and better police training. Radar guns were introduced around the 1950s, so all of their quotes from policemen are too recent to tell.
Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true.
Accepting reality for what it is helps to have an accurate perception of reality. Only once you understand the territory should you go out and try to change things. If you do the second step before the first you mess up your epistemology. You fall for a bunch of human biases evolved for finding out whether the neighboring tribe might attack your tribe, which aren’t useful for a clear understanding of today’s complex world.
I spoke about incentives. Researchers have an incentive to publish in prestigious journals and optimize their research practices for doing so. The case with blogs isn’t much different. Successful bloggers write polarizing posts that get people talking and engaging with the story, even when there would be a way to be more accurate and less polarizing. The incentives point towards “spectacular”.
Scott H Young, whom I respect and who’s a nice fellow, wrote his post against spaced repetition and now, in a later post, recommends the use of Anki for learning vocabulary.
It’s not about remembering; it’s about being able to make estimates even when you aren’t sure. And you can calibrate your error intervals.
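As a sketch of what an estimate with an error interval can look like (the factors below are hypothetical placeholders; the usual trick is to multiply rough factors and track the uncertainty in log space):

    import math

    # Fermi-style estimate with a crude error interval, done in log space.
    # Each pair is (point estimate, "within a factor of k" uncertainty);
    # both values are hypothetical placeholders.
    factors = [(1e7, 2), (100, 3), (10, 2)]

    log_est = sum(math.log10(f) for f, _ in factors)
    log_err = math.sqrt(sum(math.log10(k) ** 2 for _, k in factors))

    print(f"estimate ~10^{log_est:.1f}, "
          f"interval 10^{log_est - log_err:.1f} to 10^{log_est + log_err:.1f}")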
Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.
One of the studies in question suggested that it makes women more attracted to you, as measured by physical distance in conversation. Another one suggested increased attraction based on photo ratings.
I actually did the comparison on HotOrNot. I tested a blue shirt against a red shirt, photoshopped so that nothing besides the color was different. For my photo, blue scored as more attractive than red, despite the studies saying that red is the color that raises attractiveness.
The replication rates for cancer biology seem to be even worse than for psychology if you trust the Amgen researchers who could only replicate 6 of 55 landmark studies that they tried to replicate.
Probably a minor point, but were both the red and blue shirts photoshopped? If one of them was an actual photo, it might have looked more natural (color reflected on to your face) than the other.
In this case no, the blue was the original; you are right that this might have screwed with the results. HotOrNot’s internal algorithms were also a bit opaque.
But to be fair, the setup of the original study wasn’t natural either. The color in those studies was the color of the border of the photo.
If I wanted to repeat the experiment I would like to do it on Amazon Mechanical Turk. At the moment I don’t really have the spare money for projects like that, but maybe someone else on LW cares enough about dressing in an attractive way, wants to optimize it, and has the money.
The whole thing might also work well for a blogger willing to spend a bit of cash to write an interesting post.
Especially for online dating like Tinder, photo optimisation through empirical measurement can increase success rates a bit.
I’m not certain where you see conflation. I have separate storage areas for things to think about, evidence, actions, and risk/reward evaluations. They interact as described here. Things I hear about go into the “things to think about” list.
The world is changing so I must too. If the apocalypse is tomorrow, I’m ready. I don’t need to “understand” the apocalypse or its cause to start preparing for it. If I learn something later that says I did the wrong thing, so be it. I prefer spending most of my time trying to change things to sitting in a room all day trying to understand. Indeed, some understanding can only be gained through direct experience. So I disagree with you here.
The decision procedure I outlined above accounts for most biases; you’re welcome to suggest revisions or stuff I should read.
You didn’t, AFAICT; you spoke about “inherent biases”. I think my point still stands though; averaging over “all information everywhere” counteracts most perverse incentives, since perversion is rare, and the few incentives left are incentives that are shared among humans such as survival, reproduction, etc. In general humans are good at that sort of averaging, although of course there are timing and priming effects. Researchers/bloggers are incentivized to produce good results because good results are the most useful and interesting. Good results lead to good products or services (after a 30 year lag). The products/services lead to improved life (at least for some). Improved life leads to more free time and better research methods. And the cycle goes on, the end result AFAICT is a big database of mostly-correct information.
His post is entitled “Why Forgetting Can Be Good” and his mention of Anki is limited to “I’m skeptical of the value of an SRS for most domains of knowledge.” If he then recommends Anki for learning vocabulary, this changes relatively little; he’s simply found a knowledge domain where he found SRS useful. Different studies, different conclusions, different contributions to different decisions.
You’re never sure, so why mention “even when you aren’t sure”, since it’s implied? Striking that out…
Estimation comes after the evidence-gathering phase. If you have no evidence you can make no estimates. Fermi estimation is just another estimation method, so it doesn’t change this. If you have no memory, then you have no evidence. So it is about remembering. “Those who cannot remember the past are condemned to repeat it”.
If you have no estimates you can’t have error intervals either. Indeed, you can’t do calibration until you have a distribution of estimates.
It looks like the central word is definitely dominance. Stringing the top words into a sentence I get “Sports teams wear red to show dominance and it has an effect on referees’ performance”. I guess I was going off of the Mandrill story where signs of dominance are correlated with willingness to be aggressive. This study says dominance and threat are emphasized by wearing red, where “threat” is measured by “How threatening (intimidating, aggressive) did you feel?”. Some other papers also relate dominance to aggressiveness. So I feel comfortable confusing the two, since they seem to be strongly correlated and relatively flexible in terms of definition.
The comments do focus on status, so I guess you have a point. But I generally skip over the comments when an article is linked to. And the status discussion was in the comments of the Overcoming Bias post, so by no means central.
Would you be referring to, among others, this study? Unfortunately… it still looks like experimental psychology, so again I have to plead lack of statistics.
I’ve mostly been reading Army / DoD studies, which have a different funding model. But I guess cancer will become relevant eventually (preferably later rather than sooner).
Side note: does LW have a “collapse threads more than N levels deep” feature like reddit? It probably should have triggered a few replies ago, so I didn’t post on the wrong child...
The problem is that you assume that you know the relevant biases. There are often cases where you don’t know why someone screws up. There are domains where it’s easier to get knowledge about how much people screw up than to understand the reasons behind the screwups.
Fear produces fight-or-flight responses. People often fight out of fear. Aggressiveness often comes out of weakness. A karate black belt is dominant but usually not aggressive. Taller people get paid more money because being tall is a signal of social dominance.
Yes.
Wikipedia has a list; I’ve checked a few of them, and the rest are on my TODO list. I have that page watched so if there’s a new bias I’ll know.
Information is produced regardless, and often recorded (see e.g. Gwern’s Mistakes page). So long as I myself don’t screw up, which, assuming that I always follow my decision procedure and my decision procedure is correct, I won’t, then it doesn’t matter.
OK, but I was talking about “perceived willingness to be aggressive” (signal), not aggression (action).
Someone wearing a black belt is probably going to be perceived as more aggressive, the same way someone idly cleaning their fingernails with a sharp knife might be. Similarly if a person adopts something recognized as a fighting stance. Not certain about tall people, that’s probably something else besides perceived aggressiveness, e.g. “My parents were rich and could feed me a lot”.
This has gone on long enough that it might be worth summarizing into a post… do you want to write it or should I?
There’s no good evidence for the claim that reading a list of a bunch of biases improves your decision-making ability. See Eliezer’s discussion of the hindsight bias: http://lesswrong.com/lw/il/hindsight_bias/
I’m not so much talking about actually wearing the black belt as about the psychological changes created by the kind of training that makes someone a black belt. Changes in confidence and body language.
We went through many separate points, and at the moment I don’t know how to pull them together in a good way into one post. If you see a decent way, feel free.
I checked that the procedure accounts for the biases. Hindsight bias is avoided by computing uncertainty using a regression analysis. Availability bias is avoided by using a large database with random sampling. Etc. I haven’t gone through all of them, but so far the biases I’ve looked at can’t affect the decision outcome because the human isn’t directly involved in those stages of computation.
And there’s even a study on black uniforms that shows they increase perceived aggression.
This page says martial arts training increases dominance, as you say. On the other hand, that study also says that martial arts training decreases (observed) aggression. This study says perceived aggressiveness is highly correlated with proportion of mixed-martial-arts fights won, which I interpret as also meaning that martial arts training increases perceived aggression before a fight (since martial training ought to result in winning more martial arts fights). So it looks like martial arts training encourages controlling the aggressiveness signal, suppressing it in some non-fighting cases and enhancing it in competition. Or else the actual aggression levels decreased because the willingness to fight was communicated more clearly and thus people chose to fight less because their estimates of the costs rose.
My general writing strategy is as follows: I go through source material, write down all the quotes/facts that seem useful into a bullet list, then sort alphabetically, then reorder and group the bullets, then rewrite the sub-bullets into paragraphs, then reorder the paragraphs, then remove the list formatting and add paragraph formatting, then add a title and introduction. (The conclusion is just more facts/quotes). I’ve practiced this on a couple of my required-because-core essays and they’ve gotten reasonable marks (B+ / A- level depending on how nice the teacher is).
In most social situations aggressiveness is bad. A woman doesn’t want an aggressive boyfriend. But she usually doesn’t want a boyfriend who is low status and without any amount of dominance, either.
If you sit in school it’s good if your teacher is dominant but aggression is not a sign of a good teacher.
People don’t make clear estimates of costs when in high-pressure situations. Instead, fight/flight/freeze reactions trigger. Martial arts training removes that trigger and instead allows its participants to make more conscious decisions about whether to fight. Being able to make conscious decisions often leads to fewer fights.
Then I’m happy to see the outcome in this case.
What sort of beliefs are you talking about here? Are you classifying simply being wrong about something as a “delusional disorder”?
Exhibiting symptoms often considered as signs of mental illness. For example, this says 38.6% of general people have hallucinations. This says 40% of general people had paranoid thoughts. Presumably these groups aren’t exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.
The basic idea is to talk about your belief in detail with a trusted friend that you consider sane.
Writing your own thought processes down in a diary also helps you evaluate them better.
There is a common idea in the “critical thinking”/”traditional rationality” community that (roughly) you should, when exposed to an argument, either identify a problem with it or come to believe the argument’s conclusion. From a Bayesian framework, however, this idea seems clearly flawed. When presented with an argument for a certain conclusion, my failure to spot a flaw in the argument might be explained by either the argument’s being sound or by my inability to identify flawed arguments. So the degree to which I should update in either direction depends on my corresponding prior beliefs. In particular, if I have independent evidence that the argument’s conclusion is false and that my skills for detecting flaws in arguments are imperfect, it seems perfectly legitimate to say, “Look, your argument appears sound to me, but given what I know, both about the matter at hand and about my own cognitive abilities, it is much more likely that there’s a flaw in your argument which I cannot detect than that its conclusion is true.” Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?
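To make this concrete, a minimal sketch with made-up numbers, treating “the argument is sound” and “the conclusion is true” as the same thing for simplicity:

    # Made-up numbers illustrating "your argument seems sound to me,
    # but I still don't believe your conclusion".
    prior_conclusion = 0.02       # independent evidence says probably false
    p_miss_flaw = 0.3             # chance I fail to spot a flaw that exists
    p_seems_sound_if_sound = 0.95

    # P(conclusion true | the argument seems flawless to me):
    num = prior_conclusion * p_seems_sound_if_sound
    den = num + (1 - prior_conclusion) * p_miss_flaw
    print(num / den)   # ~0.06: a missed flaw is still the better bet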
Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...
Awfully similar, but not identical.
In the first case, you have independent evidence that the conclusion is false, so you’re basically saying “If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments.”
In the second case, you’re saying “I have already concluded that your conclusion is false because I have concluded that mine is true. I think it’s more likely that there is a flaw in your conclusion that I can’t detect than that there is a flaw in the reasoning that led to my conclusion.”
The person in the first case is far more likely to respond with “I don’t know” in response to the question of “So what do you think the real answer is, then?” In our culture (both outside, and, to a lesser but still significant degree inside LW), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form “If Y is true, how do you explain X?” which is quite common. Unfortunately, this form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.
Rereading your comment, I see that there are two ways to interpret it. The first is “Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion.” The second is “Rationalists do not use this form of argument because it is flawed—they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence.” I agree with the first interpretation, but not the second—that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.
“Independent evidence” is a tricky concept. Since we are talking Bayesianism here, at the moment you’re rejecting the argument it’s not evidence any more, it’s part of your prior. Maybe there was evidence in the past that you’ve updated on, but when you refuse to accept the argument, you’re refusing to accept it solely on the basis of your prior.
Which is pretty much equivalent to saying “I have seen evidence that your conclusion is false, so I already updated that it is false and my position is true and that’s why I reject your argument”.
I think both apply.
In fact that case is just a special case of the former with you having bad priors.
Not quite, your priors might be good. We’re talking here about ignoring evidence and that’s a separate issue from whether your priors are adequate or not.
This idea seems like a manifestation of epistemic learned helplessness.
I say things like this a lot in contexts where I know there are experts, but I have put no effort into learning which are the reliable ones. So when someone asserts something about (a) nutritional science (b) Biblical translation nuances (c) assorted other things in this category, I tend to say, “I really don’t have the relevant background to evaluate your argument, and it’s not a field I’m planning to do the legwork to understand very well.”
In my experience there are LW people who would in such cases simply declare that they won’t be convinced of the topic at hand and suggest to change the subject.
I particularly remember a conversation at the LW community camp about geopolitics where a person simply declared that they aren’t able to evaluate arguments on the matter and therefore won’t be convinced.
That was probably me. I don’t think I handled the situation particularly gracefully, but I really didn’t want to continue that conversation, and I couldn’t see whether the person in question was wearing a Crocker’s rules tag.
I don’t remember my actual words, but I think I wasn’t trying to go for “nothing could possibly convince me”, so much as “nothing said in this conversation could convince me”.
It’s still more graceful than the “I think you are wrong based on my heuristics but I can’t tell you where you are wrong” that Pablo Stafforini advocates.
Because that ends the discussion. I think a lot of people around here just enjoy debating arguments (certainly I do).
I actually do say things like this pretty frequently, though I haven’t had the opportunity to do so on LW yet.
A similar situation used to happen frequently to me in real life, when the argument was too long, too complex, or used information that I couldn’t verify… or could, but only if the verification took a lot of time… something like: “There is this 1000-page book containing complex philosophical arguments and information from non-mainstream but cited sources, which totally proves that my religion is correct.” And there is nothing obviously incorrect within the first five pages. But I am certainly not going to read it all. And the other person tries to use my self-image as an intelligent person against me, insisting that I should promise I will read the whole book and then debate it (which is supposedly the rational thing to do in such a situation: hey, here is the evidence, you just refuse to look at it), or else I am not really intelligent.
And in such situations I just waved my hands and said—well, I guess you just have to consider me unintelligent—and went away.
I didn’t think about how to formalize this properly. It was just this: I recognize the trap, and refuse to walk inside. If it happened to me these days, I could probably try explaining my reaction in Bayesian terms, but it would be still socially awkward. I mean, in the case of religion, the true answer would show that I believe my opponent is either dishonest or stupid (which is why I expect him to give me false arguments); which is not a nice thing to say to people. And yeah, it seems similar to ignoring evidence for irrational reasons.
Nothing, including rationality, requires you to look at ALL evidence that you could possibly access. Among other things, your time is both finite and valuable.
Related link: Peter van Inwagen’s article Is it wrong everywhere, always, and for everyone, to believe anything on insufficient evidence?. van Inwagen suggests not, on the grounds that if it were then no philosopher could ever continue believing something firmly when there are other smarter equally well informed philosophers who strongly disagree. I find this argument less compelling than van Inwagen does.
Haha. You should believe exactly what the evidence suggests, and exactly to the degree that it suggests it. The argument is also an amusing example of ‘one man’s modus ponens...’.
Quoted in full from here:
I see the broad point Waytz is making, but the ranty delivery is pretty silly. Why is the doctor’s act not selfless? It certainly appears to be motivated by altruism (even if that altruism is misguided, from a utilitarian perspective). Having a non-utilitarian moral code is not the same thing as selfishness.
Second, the anger in that comment seems to have more to do with a distaste for deontological altruistic gestures than anything else. I really doubt Waytz would be as mad if the doctor had simply decided that he had had enough of working in the medical profession and decided to open a bistro instead.
Not sure if this belongs here, but not sure where else it should go.
Many pages on the internet disappear, returning 404s when you look for them (especially older pages). The material I found on LW and OB is of such great quality that I would really hate it if some of the pages here also disappeared (as in, became harder for me to access). I am not sure how realistic this is, but the thought does bother me. So I was hoping to somehow make a local backup of LW/OB, downloading all pages to a hard drive. There are other reasons for wanting the same thing: I am frequently in regions without internet access, and it might finally allow me to organise the posts (the categories on LW leave much to be desired; the closest thing to a good structure I found is the chronological list on OB, which seems to be absent on LW?).
So my triple question: should I be worried about pages disappearing (probably not too much), would it still be a good idea to make a local backup (probably yes: storage is cheap, and I think it would be useful for me personally to have LW offline, even if only the older posts), and how does one go about this?
You might be interested in reading Gwern’s page on Archiving URLs and Link Rot
Pages here are disappearing—someone’s been going through the archive deleting posts they don’t like. (Cf. [1] versus [2].) (The post is still partly available, but the 152 comments are no longer associated with it.) So get archiving sooner rather than later.
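For the “how”: wget’s mirroring mode is the usual tool, but if you want something you can tweak, here is a minimal Python sketch (urls.txt is a hypothetical file listing post URLs you’ve collected, e.g. from the chronological archive pages):

```python
import os
import time
import urllib.request
from urllib.parse import urlparse

def mirror(urls, out_dir="lw_backup", delay=1.0):
    """Save each page as a local HTML file, politely rate-limited."""
    os.makedirs(out_dir, exist_ok=True)
    for url in urls:
        # Derive a flat filename from the URL path.
        name = urlparse(url).path.strip("/").replace("/", "_") or "index"
        path = os.path.join(out_dir, name + ".html")
        try:
            with urllib.request.urlopen(url) as resp:
                data = resp.read()
            with open(path, "wb") as f:
                f.write(data)
        except Exception as e:
            print("failed:", url, e)
        time.sleep(delay)  # don't hammer the server

# urls.txt is a hypothetical file of post URLs, one per line.
with open("urls.txt") as f:
    mirror(line.strip() for line in f if line.strip())
```

This only grabs the pages themselves, not images or stylesheets; for a full mirror with rewritten links, wget’s recursive mode is the more thorough option.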
New open thread
How to Work with “Stupid” People
The hypothesis is that people frequently underestimate the intelligence of those they work with. The article suggests some ways people could get the wrong impression, and some strategies for improving communications and relationships. It all seems very plausible.
However, the author doesn’t offer any examples, and the comments are full of complaints about unchangeably stupid coworkers.
I believe I had the opposite problem most of my life. I was taught to be humble, to never believe I am better than anyone else, et cetera. Nice political slogans, and I should probably go on publicly pretending to believe them. But the problem is that I have a lot of data on people doing stupid things, and I need some explanation. And of course, if I forbid myself to use the potentially correct explanation, then I am pushing myself towards the incorrect ones.
Sometimes the problem is that I didn’t understand something, so the seemingly stupid behavior wasn’t actually stupid; it was me failing to understand. Yes, sometimes this happens, so it is reasonable to take this hypothesis seriously. But oftentimes, even after careful exploration, the stupid behavior is just stupid. When people keep saying that 2+2=5, it could mean they have secret mathematical knowledge unknown to you; but it is more likely that they are simply wrong.
But the worse problem is that refusing to believe in other people’s stupidity deprives you of the wisdom of “Never attribute to malice that which is adequately explained by stupidity.” Not believing in stupidity can make you paranoid, because if those people don’t do stupid things out of stupidity, then they must have some purpose in doing them. And if it’s a stupid thing that happens to harm you, it means they hate you, or at least don’t mind that you are harmed. Ignorance starts to look like strategic plausible deniability.
I had to overcome my upbringing and say to myself: “Viliam, your IQ is at least four sigma above the average, so when many people seem retarded to you, even many university-educated people, that’s because they really are, compared with you. They are usually not passive-aggressive; they are trying to do their best, and their best is often just very unimpressive to you (though probably impressive in their own eyes, and in the eyes of their peers). You are expecting more from them than they can realistically provide, and they often don’t even understand what you are saying. They live in their world, where they are the norm and you are the exception. And it will never change, so you had better get used to it; otherwise you are preparing yourself for a lifetime of disappointment.”
From that moment, when I see someone doing something stupid, I consider the hypothesis “maybe that’s the best their intelligence allows them to do”. And suddenly, I am not angry at most people around me. They are nice people; they are just not my equals, and it’s not their fault. Often they have knowledge that I don’t have, and I can learn from them. (Intelligence does not equal knowledge.) But they also often do something completely stupid that likely doesn’t seem stupid in their eyes. I should not assume that everything they do makes sense. I should not expect them to be able to understand everything I am trying to explain; I can try, but I shouldn’t become too invested in it; sometimes I have to give up and accept some stupidity as a part of my environment.
The proper way to work with stupid people is to recognize their limitations and not blame them for failing to be what you want them to be. (Of course you should always check whether your estimates are correct. But they are not always wrong.)
That blog post assumes that actual stupidity is never the “real” problem. I beg to disagree.
Or does it?
This seems to mean exactly “maybe they are stupid after all”, but expressed using a different set of words.
(I would guess that the author at some point adopted “never think that someone is stupid” as a deontological rule, and then unintentionally evolved a different set of words to be able to think about stupidity without triggering the filter...)
You’re right. I’m sure that actual stupidity is sometimes the real problem. On the other hand, it would surprise me if it’s always the real problem. At that point, the question becomes how much effort is worth putting in.
I think purely from a fundamental attribution error point of view we should expect the average “stupid” person we encounter to be less stupid than they seem.
(which is not to say stupidity doesn’t exist of course, just that we might tend to overestimate its prevalence)
I guess the other question would be, are there any biases that might lead us to underestimate someone’s stupidity? Illusion of transparency, perhaps, or the halo effect? I still think we’re on net biased against thinking other people are as smart as us.
Sex appeal, of course :-D
Are you saying that charlatans and cranks don’t exist or at least never manage to obtain any followers?
I have been considering finding a group of writers/artists to associate with in order to both provide me a catalyst for self-improvement and a set of peers who are serious about their work. I have several friends who are “into” writing or comics or whatever other medium, but most of them are as “into” it as the time between video games, drinking, and staying up late to binge Dexter episodes allows.
We have a whole sequence here on LessWrong about the Craft and the Community, so I don’t feel the need to provide bits of anecdotal evidence for why I think having a community for your craft is a good idea.
Instead, I’ll just ask, to the writers: how have you found a community for your craft/have you bothered?
I put writing online for free and siphoned off spare HPMoR fans until I had enough fanbase to maintain my own stable of beta readers, set of tumblr tags, and modestly populated forum. This is more how I cultivated a fandom than a set of colleagues, but some of the people I collected this way also cowrite with me and most of them are available to spur me along.
I was once part of an online community on the sffworld writing forum. There were regular posters, like on any forum, and there was also a small workshop (6-8 people); each week two people would submit something for the rest of the group to read and provide feedback on. It was motivating and fun.
I frequent a sci-fi fan club in my city and from that group emerged a tiny writing workshop (6 members currently). The couple of guys who came up with the idea had heard that I wrote some small stuff and won a local contest, and thus I got invited. Every two Sundays we meet via Skype to comment on the stories that we’ve posted to our FB group since the last meeting. It has been helpful for me; we’ve agreed to be brutally honest with one another.
As a person living very far away from west Africa, how worried should I be about the current Ebola outbreak?
(Not in any way an expert; just going by what I’ve heard elsewhere.) I think the answer probably depends substantially on how much you care about the welfare of West Africans. It is very unlikely to have any impact to speak of in the US or Western Europe, for instance.
No, You’re Not Going To Get Ebola
Sorry, realized I don’t feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies).
TL;DR: Ebola is very hard to transmit person to person. Don’t think flu, think STDs.
Ebola isn’t airborne, so breathing the same air as, or being on the same plane as, an Ebola case will not give you Ebola. It doesn’t spread quite like STDs, but it does require getting an infected person’s bodily fluids (urine, semen, blood, vomit) mixed up with your bodily fluids or in contact with a mucous membrane.
So, don’t sex up your recently returned Peace Corps friend who’s been feeling a little fluish, and you should be a-ok.
A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends contact and droplet precautions.
Note the following description of (casual) contact:
(Much more contagious than an STD.)
But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g. Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases among approximately 20 million people in Guinea, Sierra Leone and Liberia, i.e. roughly 1 in 10,000), and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning ‘ember’ (or 10 or 20) and any change in these probabilities—plenty of time to handle and douse any further hotspots that form.
The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.
We need to douse it while it is relatively small—I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.
Um. Given that an epidemic is actually happening and given that more than one doctor attending Ebola patients got infected, I’m not sure that “very hard” is the right term here.
Having said that, if you don’t live in West Africa your chances of getting Ebola are pretty close to zero. You should be much more afraid of lightning strikes, for example.
Non-conventional thinking here, feel free to tell me why this is wrong/stupid/dangerous.
I am young and healthy, and when I catch a cold, I think “cool, when I recover: immune system +1.” I take this one step further, though: when I don’t get sick for a long time, I start to hope I get sick, because I want to exercise my immune system. I know this might sound obviously wrong, but can we discuss why exactly?
My priors tell me that actively avoiding all germs and people to prevent getting sick is unhealthy. So I have lived my life not avoiding germs, but also not asking people to cough on me either. But is there room to optimize? I caught something pretty nasty that lasted a month, and I am sure I got it from being at a large music festival breathing hot, breathy air; but better now than catching that strain of whatever it was when I am 70, right? And I don’t mean I want to catch a serious case of pneumonia and potentially die; I mean, what if there were a way to deliberately catch a strain of the common cold every now and then?
There are over 100 strains of the common cold. If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future. On the other hand, good hygiene will significantly decrease your chance of being infected by most contagious diseases.
It’s at least plausible that people become less vulnerable to colds as they get older.
http://www.nytimes.com/2013/08/06/science/can-immunity-to-the-common-cold-come-with-age.html?_r=0
He’s not talking about gaining immunity in the vaccination sense. He’s talking about developing a better, stronger immune system.
Maybe, but I don’t think you can find out—the data is too noisy and the variance is too big.
Besides, of course, the better your immune system gets, the more rarely you will get sick with infectious diseases...
The catch I’d expect here is for the marginal immunological benefit from an extra cold to be less than the marginal cost of suffering an extra cold, although a priori I’m not sure which way a cost-benefit analysis would go.
It’d depend on how well colds help your immune system fight other diseases; the expected marginal number of colds prevented per extra cold suffered; the risk of longer-term side effects of colds; how the cost of getting sick changes with age (which you mentioned); the chance that you’ll mistakenly catch something else (like influenza) if you try to catch someone else’s cold; and the doloric cost of suffering through a cold. One might have to trawl through epidemiology papers to put usable numbers on these.
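To make the structure of that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python; every number in it is a made-up placeholder, not an epidemiological estimate:

```python
# Toy expected-value comparison for deliberately catching a cold.
# All numbers are placeholders illustrating the structure of the
# trade-off, not actual estimates.

days_sick_per_cold = 5       # suffering cost of the extra cold, in sick-days
colds_prevented = 0.05       # expected future colds avoided per extra cold
future_days_saved = colds_prevented * days_sick_per_cold
side_effect_days = 0.5       # expected cost of complications, or catching flu instead

net_days = future_days_saved - days_sick_per_cold - side_effect_days
print(f"net benefit: {net_days:+.2f} sick-days")  # negative => don't do it
```

With anything like these placeholder numbers the net comes out negative, which is the intuition behind guessing that the marginal cost exceeds the marginal benefit; real epidemiology could of course move any of the inputs.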
Consuming probiotics (or even specks of dirt picked up from the ground) might be easier & safer.
Your immune system is already being subjected to constant demands by the simple fact that you don’t live in a quarantine bunker. Let it do its job. Intentional germ-seeking is reckless.
Thought that people (particularly in the UK) might be interested to see this, a blog post from one of the broadsheets on Bostrom’s Superintelligence:
http://blogs.telegraph.co.uk/news/tomchiversscience/100282568/a-robot-thats-smarter-than-us-theres-one-big-problem-with-that/
Another attempt at a sleep sensor, currently funded on Kickstarter.
Another piece of potentially useful information that may be new to some folks here: sleeping more than ~7.5 hours is associated with a higher mortality risk (and the risk is comparable to that of sleeping less than ~5 hours).
Relevant literature reviews:
Cappuccio FP, D’Elia L, Strazzullo P, et al. Sleep duration and all-cause mortality: a systematic review and meta-analysis of prospective studies. Sleep 2010;33(5):585-592.
Grandner MA, Hale L, Moore M, et al. Mortality associated with short sleep duration: the evidence, the possible mechanisms, and the future. Sleep Med Rev 2010;14(3):191-203.
Grandner MA, Drummond SP. Who are the long sleepers? Towards an understanding of the mortality relationship. Sleep Med Rev 2007;11(5):341-360.
I don’t find these results to be of much value. There’s a long history of sleep-duration correlations turning out to be confounds from various diseases and conditions (as your quote discusses), so there’s more reason than usual to discount the possibility of causation; and if you do that, why would anyone care about the results? I don’t think a merely predictive relationship is much good for, say, retirement planning or diagnosing your health from your measured sleep. And on the other hand, there are plenty of experimental studies of sleep deprivation, chronic or acute, affecting mental and physical health, which override these extremely dubious correlates. It’s not a fair fight.
Yes, my primary reason for posting these studies was actually to elicit a discussion about the kinds of conclusions we may or may not be entitled to draw from them (though I failed to make this clear in my original comment). I would like to have a better epistemic framework for drawing inferences from correlational studies, and it is unclear to me whether the sheer (apparent) poor track-record of correlational studies when assessed in light of subsequent experiments is enough to dismiss them altogether as sources of evidence for causal hypotheses. And if we do accept that sometimes correlational studies are evidentially causally relevant, can we identify an explicit set of conditions that need to obtain for that to be the case, or are these grounds so elusive that we can only rely on subjective judgment and intuition?
Based on that data, I think a blanket suggestion that everybody should sleep 8 hours isn’t warranted. It seems that some people with illnesses or who are exposed to other stressors need 8 hours.
I would advocate that everybody sleeps enough to be fully rested instead of trying to sleep a specific number of hours that some authority considers to be right for the average person.
I think the same goes for daily water consumption. Optimize values like that in a way that makes you feel good on a daily basis instead of targeting a value that seems to be optimal for the average person.
What are your grounds for making this recommendation? The parallel suggestion that everyone should eat enough to feel fully satisfied doesn’t seem like a recipe for optimal health, so why think things should be different with sleep? Indeed, the analogy between food and sleep is drawn explicitly in one of the papers I cited, and it seems that a “wisdom of nature” heuristic (due to “changed tradeoffs”; see Bostrom & Sandberg, sect. 2) might support a policy of moderation in both food and sleep. Although this is all admittedly very speculative.
Years of thinking about the issue that aren’t easily compressed.
In general, alarm clocks don’t seem to be healthy devices. The idea of habitually breaking sleep at a random point of the sleep cycle doesn’t seem good.
Let’s say we look at a person who needs 8 hours of sleep to feel fully rested. The person has health issue X. When we solve X, they only need 7 hours of sleep. The obvious move isn’t to wake the person after 7 hours of sleep, but to actually fix X.
That view of sleep is consistent with the research showing that forcibly cutting people’s sleep in a way that leads to sleep deprivation is bad. It also explains why people who sleep 8 hours on average die earlier than people who sleep 7 hours.
If I get a cold, my body needs additional sleep during that time. I have a hard time imagining that cutting away that additional sleep is healthy.
If we look at eating, I think similar things are true. There’s not much evidence that forced dieting is healthy. Fixing underlying issues seems preferable to forcibly limiting food consumption.
While we are at the topic of sleep and mortality it’s worth pointing out that sleeping pills are very harmful to health.
What it means to be statistically educated, a list by the American Statistical Association. Not half bad.
Anybody have any advice on how to successfully implement doublethink?
Once upon a time I tried using what I might call “quicklists”. I took a receipt, turned it over to the back (blank side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I’d been doing, and some policemen asking me pointed questions. (I don’t believe any drugs were involved, just sleep deprivation, but I can’t say for certain.)
More recently, I rented and watched the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable, and the techniques the character uses to work around it are easily adapted to real life. My initial test involved printing out a pamphlet with some dentistry material in tiny type (7 12-pt pages shrunk to fit on the front and back of 1 page, folded in quarters) and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well, so it’s hard to quantify the exact effect.
I’m not certain these techniques actually count as “doublethink”, since the contradiction is between my “internal” beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.
NB: Retrieving your original beliefs after you’ve been going off of the ones from the paper is left as an exercise to the student
I would like to read more about this. Would you consider writing it up?
I thought I had written all I could. What sort of things should I add?
I think a little more elaboration on the quicklists experiment would be appreciated, and in particular a clearer description of what you think transpired when it went “too right”. For me, at least, your experimental outcome might be extremely surprising (depending on the extent of the sleep deprivation involved), but I’m not even sure yet what model I should be re-assessing.
I’ve been looking for tools to help organize complex arguments and systems into diagrams, and ran into Flying Logic and Southbeach modeller. Could anyone here with experience using these comment on their value?
I don’t have experience with those, but I’ll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg
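To give a flavor of what that looks like, here is a minimal pure-Python sketch that emits an argument diagram in Graphviz’s DOT format (the claims are placeholder text; render the output with the standard dot tool):

```python
# Emit a tiny argument map as Graphviz DOT text (standard library only).
# Usage: python argmap.py > argument.dot && dot -Tsvg argument.dot -o argument.svg
# The claims below are placeholders, not a real argument.

claims = {
    "C":  "Conclusion C",
    "P1": "Premise 1",
    "P2": "Premise 2",
}
supports = [("P1", "C"), ("P2", "C")]  # premise -> conclusion edges

lines = ["digraph argument {", "  rankdir=BT;", "  node [shape=box];"]
for key, text in claims.items():
    lines.append(f'  {key} [label="{text}"];')
for src, dst in supports:
    lines.append(f"  {src} -> {dst};")
lines.append("}")
print("\n".join(lines))
```

The appeal of this route is that the diagram lives in plain text, so it can be versioned and regenerated, whereas the GUI tools keep the structure in a proprietary file.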
And UnBBayes does computational analyses, similar to Flying Logic, except it uses Bayesian probability.
Suppose you wanted to find out all the correlates for particular Big Five personality traits. Where would you look, besides the General Social Survey?
Would ‘Google Scholar’ be too glib an answer here?
It gave me mostly psychological and physiological correlates. I’m interested more in behavioral and social/economic things. I suppose you can get from the former to the latter, though with much less confidence than a directly observed correlation.
Your answer is exactly as glib as it should be, but only because I didn’t really specify what I’m curious about.
I’m at Otakon 2014, and there was a panel today about philosophy and videogames. The description read like Less Wrongese. I couldn’t get in (it was full) but I’m wondering if anyone here was responsible for it.
Is there a way to see if I can vote both ways?
A month or so ago I started to get errors saying I can’t downvote. I don’t really care that much (it’s not me that’s gaining from my vote), but if I can’t downvote I want to make sure I don’t upvote so I don’t bias things.
Your downvotes are limited by your karma (I think it’s four downvotes to a karma point). I don’t think you will meaningfully bias anything if you continue to upvote things you like while accumulating enough karma to downvote again.
Yeah, it’s the principle of it. I guess I’ll just try a downvote before I upvote going forward. Thanks, Al.
That they are, even when everything works perfectly. There was also an error a while ago that gave the same error message to (some?) people who were not at their limit.
I had those too. It stopped rather quickly.
Anchoring in marathon runners.
That’s a pretty cool histogram in figure 2.
Correct link (PDF).
Oh, dear.
Harry Potter And The Cryptocurrency of Stars
What is the general opinion on neurofeedback? Apparently there is scientific evidence pointing to its efficacy, but have there been controlled studies showing greater benefit from neurofeedback than from traditional methods, where such comparisons exist?
I have done a lot of neurofeedback. It’s more of an art than a science right now. I think many studies have shown some benefit, although I don’t know if any are long-term. But the studies might not be of much value, since there is so much variation in treatment: it is supposed to be customized for your brain. The first step is going to a neurofeedback provider and having him or her look at your qEEG to see how your brain differs from a typical person’s brain. Ideally for treatment, you would say “I have this problem,” and the provider would say, “yes, this is due to your having …, and with 20 sessions we can probably improve you.” Although I am not a medical doctor, I would strongly advise anyone who can afford it to try neurofeedback before trying drugs such as anti-depressants.
Does anyone have any experience or thoughts regarding Cal Newport’s “Study Hacks” blog, or his books? I’m trying to get an idea of how reliable his advice is before, say, reading his book about college, or reading all of the blog archives.
Some LW discussions of his books: A summary and broad points of agreement and disagreement with Cal Newport’s book on high school extracurriculars, Book Review: So Good They Can’t Ignore You, by Cal Newport, The failed simulation effect and its implications for the optimization of extracurricular activities.
Cognito Mentoring refer to him a fair bit, and often in mild agreement. Check their blog and wiki.
A history of anime fandom
I’m not vouching for this, but it sounds plausible.
Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.
So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?
Yes, though ignoring the effects of evaporation is ignoring a major factor.
Yes, it’s how hair dryers work.
Yes. Your body would try to cool a face exposed to hot air by circulating more blood through it, creating a temperature gradient through the surface layer. Consequently, the air nearest your face would be colder than ambient. A wind would blow away this cooler air, leaving air at ambient temperature touching your skin. Of course, in reality humidity and sweating are major factors, negating the above analysis.
Yes. This sometimes happens in a really wet sauna.
But conditions in which you actually feel this also kill you in less than a day. You need to lose about 100 W of heat in order to keep a stable body temperature, and moving air only feels hotter than still air if you are gaining heat from the air.
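In symbols (a sketch ignoring radiation and evaporation): the convective heat flow is roughly $\dot{Q} = hA\,(T_{\mathrm{air}} - T_{\mathrm{skin}})$, where $h$ is the convective heat-transfer coefficient and $A$ the exposed area. Wind increases $h$ (forced rather than natural convection), so it amplifies whichever heat flow is already happening: outward when $T_{\mathrm{air}} < T_{\mathrm{skin}}$ (wind chill), inward when $T_{\mathrm{air}} > T_{\mathrm{skin}}$ (the hair-dryer effect).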
Scicast: I mentioned this in the last open thread, but it was late in the month and got buried. Who here participates on Scicast? I’m there under this name. It would be good to get a tally of how much LW prescience there is and how we as a group are doing. So if you’re there, sound off.
Has anyone tried to watch “Atheist TV”? https://atheists.org/atheistTV/live
I’ve joked that you would have trouble following the programming because the shows would start and stop suddenly through random chance. ; )
Seriously, I hope it doesn’t run “atheist porn” about Madalyn O’Hair’s alleged greatness, an opinion of her legacy I don’t happen to share. I’ve read several accounts of her life which show how big a mess she made of it, leading up to her abduction and murder by a violent career criminal named David Waters, whom she had hired for her American Atheists organization and then managed to piss off somehow. Madalyn’s younger son, the atheist activist Jon Murray, and her granddaughter (Jon’s niece) Robin all lived together, and they all died at the hands of Waters and his accomplices.
Despite my efforts to bring this up on atheist forums, apparently Madalyn’s fans don’t want to discuss the weirdness of her family situation. In her 1965 Playboy interview, Madalyn says that she thought girls should become sexually active as early as 13, and boys at 15, and that religious superstition interfered with normal sexual development and fulfillment. Yet she kept her younger son Jon from moving out of her house, and she reportedly ran off the only known girlfriend Jon ever had (it remains unknown whether Jon ever had his sexual debut with any woman); so Jon, the atheist, up through his murder at age 40, lived like a sexually abstinent christian or something, and quite possibly died a virgin.
If “atheism” makes it easier to become sexually self-actualized, a belief even many christians hold in a back-handed way, then Jon must have really sucked at the task of living as an atheist, despite having the example of America’s best-known atheist of the late 20th century as his mother.
Now, if some fringe christian obsessive like Fred Phelps had a 40 year old son who never moved away from home and apparently never had a girlfriend, atheists would draw conclusions from the situation which support their prejudices about the sex-negativity of certain kinds of christian belief. Why, look at what religion did to this poor fellow!
Notice the title of this article:
$7,060,259,674,497.51--Federal Debt Up $7 Trillion Under Obama http://www.cnsnews.com/news/article/terence-p-jeffrey/706025967449751-federal-debt-7t-under-obama
A Modern Monetary Theorist would look at the other side of the ledger and write, “U.S. Dollar Assets Held by Non-Federal Entities Up $7 Trillion Under Obama.” And he or she wouldn’t necessarily consider this outcome catastrophic or even harmful.
The Federal Debt seems to track the build up in retirement assets, for example: http://research.stlouisfed.org/fred2/graph/?g=mzu.
Because users of the U.S. dollar live in a closed financial system, the dollars in those assets have to come from the Federal Debt, because they literally can’t come from anywhere else.
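The accounting identity behind that claim is the standard sectoral-balances relation from national accounts (a sketch, stated first for a closed system with no foreign sector): $(S - I) = (G - T)$, i.e. the private sector can accumulate net dollar savings only to the extent that the government runs a deficit. With trade included, it becomes $(S - I) = (G - T) + (X - M)$.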