These are funny times in Slovakia; as if someone had declared a call: “Irrational people of all beliefs, unite!”
It started two years ago with the so-called “Gorilla scandal”. (TL;DR: Not a real gorilla, just the nickname of a criminal who was investigated by the secret service. Wiretapping his house revealed that almost all of our political parties, both left and right, participated in economic crime, cooperating with the same small group of people. The transcripts of the investigation were leaked to the internet.) It was followed by a few demonstrations, after which pretty much nothing happened. Realizing that most media in our country actually belong to people involved in the scandal, and thus have no incentive to investigate and report on it, some people created an internet radio station called “the free broadcast”. From that point, it gradually went downhill.
By deciding to focus on “news that have no place in the official media”, the radio station gradually selected for hoaxes, conspiracy theories, etc., which probably led to the saner people leaving and concentrated the irrationality of those who remained. One year later, it was mostly about how vaccination causes autism and how pharmaceutical companies want to prevent you from using MMS. Two years later, it seems to be mostly about how freemasonic and homosexual conspiracies are ruling the world, which is why we need to make a revolution and create a direct democracy.
Meanwhile, a new religious cult called Magnificat was created by a few excommunicated Catholic priests; or perhaps they joined an already existing cult and brought it here, I am not sure. A few years ago there were some reports about social services in other countries (mostly Britain) abusing their powers and taking children away from non-abusive families to offer them for adoption to other people, for money. This Magnificat cult is spreading the idea that this is all part of a world-wide conspiracy against the traditional family, led by atheists, homosexuals and people who hate the Virgin Mary. The cult has since registered as a political party and participated in the recent municipal elections (although I am not aware of any significant victories). They also organized a large protest “for life and traditional family”. When our Catholic church saw them stealing its main agenda, it doubled its efforts, and last week it published a new pastoral letter criticizing the “culture of death”, “gender ideology” and “the sins of Sodom”.
Meanwhile, a local neo-nazi movement became strong enough to win the municipal election in one of our counties (luckily not the one I live in). Their main agenda is fighting against the corrupt democratic politicians and the “parasitic” Roma minority. They are very popular on “the free broadcast” internet radio, together with Magnificat; at this moment they seem to support each other, at least memetically. This combination of signalling contrarianism while appealing to common prejudice seems very attractive to a lot of people. These days you can’t have an online discussion without someone explaining what a brainwashed sheep you are for not believing them.
A month ago I burned some of my social capital by publishing a blog post about how the Catholic church should stop giving tacit approval to the neo-nazis. (And by “tacit approval” I mean things like the former archbishop organizing a private mass for the local neo-nazis and blessing their leader, which church spokesmen later described as simply his private affair and nobody else’s business.) So far it doesn’t seem to have brought any benefit, except for me feeling better for having spoken my mind openly.
I feel as if I’m surrounded by complete idiots. To be honest, I have always had this feeling, but recently it has become very intense.
It seems to me that irrational people have the advantage that they can join forces relatively easily. An irrational person mostly cares about one thing; a rational person cares about many things. Suppose you have a person A believing that people are manipulated by space aliens; a person B believing that vaccination causes autism; and a person C believing that it’s all about homosexuals trying to destroy the traditional family. Technically, none of them contradicts the others. And if you succeed in creating a complex theory containing all the necessary components (the space aliens are controlling humankind by giving more political power to homosexuals, who use their power to destroy the traditional family by using vaccination to cause more autism), you already have three strong believers. And more people mean more political power! Meanwhile the rational person will disagree with A and B and C, and remain without any allies. The ability of an irrational person to accept a compatible irrational belief is popularly called “having an open mind”.
I used to wish that people were more interested in how society works, went outside their homes and tried to improve things. After seeing this, I just wish they all lost interest, returned home, and started watching some sitcoms.
I wasn’t sure whether this largely political comment was okay to write on LW, but then I realized LW is pretty much the only place I know where I could write such a comment without receiving verbal abuse, racist comments, explanations that homosexuality really is the greatest danger of our civilization, or offended complaints about how insensitive I am towards religion. Recently, LW feels like an island of sanity in a vast ocean of madness.
Perhaps this will give me more energy to promote rationality in my country. I have already arranged another LW meetup after a pause of a few months.
Martin Odersky, the inventor of the Scala programming language, writes regarding a recent rant against Scala publicized on Hacker News:
Seems hardly a weekend goes by these days without another Scala rant that makes the Hacker news frontpage. [...]
There certainly seems to be a grand coalition of people who want to attack Scala. Since this has been going on for a while, and the points of critique are usually somewhere between unbalanced and ridiculous, I have been curious why this is. I mean you can find things that suck (by some definition of “suck”) in any language, why is everybody attacking Scala? Why do you not see articles of Rubyists attacking Python or of Haskellers attacking Clojure?
The quotation is remarkable for its absolute lack of awareness of selection bias. Odersky doesn’t appear to even consider the possibility that he might be noticing the anti-Scala rants more readily than rants against other programming languages. Not having considered the possibility of the bias, he has no chance to try and correct for it. The wildly distorted impression he’s formed leads him to language bordering on conspiracy theories (“grand coalition of people who want to attack Scala”).
As someone who regularly reads Hacker News and other forums where such attacks are discussed, I have noticed a few widely discussed blog posts against Scala in the last few years, but there hasn’t been a flood of them, nor do they seem unusually frequent compared to other languages. All the languages Odersky named are regularly dissed. This anti-Ruby-on-Rails rant alone has been much more widely publicized than all of the anti-Scala stuff put together.
Odersky is incredibly smart and accomplished. My point is the pervasiveness of selection bias, and the importance of being aware of it consciously. The quoted passages amazed me because I assumed someone in his position would know this.
I think if you read what he wrote more generously (e.g. as if you were reading a mailing list post rather than something intended as a bulletproof philosophical argument), you’ll see that his implicit point (that he’s just talking about the reaction to Scala in particular) is clear enough, and, more importantly, that the eventual discussion was productive in terms of bringing up ideas for making Scala more suitable for its intended audience. Given that his post inspired just the sort of discussion he was after, I do think you’re being a bit harsh on him.
I don’t know that we disagree. I will cheerfully agree that Martin’s email was relatively measured, the discussion it kicked off was productive, and that his tone was neither bitter nor toxic. That doesn’t detract from my point—that as far as I can make out, his perception of relative attack frequency is heavily selection-biased, and he’s unaware of this danger. It is true that in this case the bias did not lead to toxic consequences, but I never said it did. The bias itself here is remarkable.
If my being a bit harsh on him basically consists of my not saying the above in the original comment, I’ll accept that; I could’ve noted in passing that the discussion that resulted was in the end largely a friendly and productive one.
Yesterday I received the following message from user “admin” in my Less Wrong inbox:
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I got this, too. I was concerned that it might not be what it claimed to be, and avoided clicking the link. I view with suspicion anything unexpected that points me anywhere I might reasonably input login details.
Does that link actually work for you? If I enter my password, it briefly says “submitting” and the button moves to a different spot, but it doesn’t seem to create a wiki account.
That private message was part of a new feature to encourage wiki participation, by helping existing Less Wrong users onto wiki accounts. Unfortunately the link to create an account didn’t point to the right place.
If you tried to create a wiki account and had the brief flash of “submitting” (like Pfft), make sure you’ve got a validated email address associated with your account.
I am an Ashkenazi Jew. We are a population with many well-documented diseases tied to recessive alleles. It is unfair to force a minority population to pay massive sums of money just to find out our own genetic situation. This applies to genes such as BRCA1, which causes cancer, or the alleles which cause Tay-Sachs and autonomic neuropathy type III, all cases where the documentation is strong. Ashkenazic Jews are not the only group in this situation, and there are also bad alleles which are not more common in specific ethnic or racial groups. The individuals with those genes deserve the same benefits.
The FDA’s move is a step in the wrong direction which interferes with the fundamental right to know about one’s own body.
The last line I added in part to aim at current left-wing attitudes about personal bodily integrity. I stole the less well known disease from Yvain’s excellent letter here, where I got to find out about yet another fun disease potentially in my gene pool. I strongly recommend people read Yvain’s letter.
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting. I happened to think of one exception: if you expect that something will cause a change in your beliefs when it shouldn’t, because it uses strong rhetorical techniques (e.g. highlighting highly unrepresentative examples) whose effect you can’t fully eliminate even when you know that they’re there.
(I have a feeling that this might have been discussed before, but I don’t remember where in that case.)
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting.
It’s more like, if you expect (in the statistical sense) that you will rationally update your beliefs in some direction upon receiving some piece of evidence, then your current probability assignments are incoherent, and you should update on pain of irrationality. It’s not just that you might as well update now instead of waiting. But this only applies if your expected future update is one that you rationally endorse. If you know that your future update will be irrational, that it is not going to be the appropriate response to the evidence presented, then your failure to update right now is not necessarily irrational. The proof of incoherence does not go through in this case.
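To make the coherence point concrete, here is a minimal numeric sketch in Python (the probabilities are invented for illustration): by the law of total probability, the prior must equal the probability-weighted average of the possible posteriors, so you cannot coherently expect, on net, to update in a particular direction.

```python
# Conservation of expected evidence: the prior equals the
# probability-weighted average of the possible posteriors.
p_h = 0.3              # prior P(H)
p_e_given_h = 0.8      # likelihood P(E|H)
p_e_given_not_h = 0.4  # likelihood P(E|~H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # P(E) = 0.52
post_e = p_e_given_h * p_h / p_e                        # P(H|E) ~= 0.462
post_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)        # P(H|~E) = 0.125

expected_posterior = p_e * post_e + (1 - p_e) * post_not_e
print(expected_posterior)  # 0.3 -- exactly the prior
```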
if you expect that something will cause a change in your beliefs when it shouldn’t
This seems like a breakdown in reflective consistency. Shouldn’t you try to actively counter/avoid the expected irrationality pressure, instead of (irrationally and meekly) waiting for it to nudge your mind in a wrong direction? Is there a specific example that prompted your comment? I can think of some cases offhand. Say, you work at a failing company and you are required to attend an all-hands pep talk by the CEO, who wants to keep employee morale up. There are multiple ways to avoid being swayed by rhetoric: not listening, writing down possible arguments and counterarguments in advance, listing the likely biases and fallacies the speaker will play on and making a point of identifying and writing them down in real time, etc.
No specific examples originally, but Yvain had a nice discussion about persuasive crackpot theories in his old blog (now friends-locked, but I think that sharing the below excerpt is okay), which seems like a good example:
When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology rather than the universally reviled crackpots who write books about Venus being a comet.
I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments are just going to be a bad idea so I don’t even try.
As for trying to actively counter the effect of the misleading rhetoric, one can certainly try, but one should also keep in mind that we’re generally quite bad at this. E.g. while not exactly the same thing, this bit from Misinformation and its Correction seems relevant:
A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011).
There’s an intermediate step of believing things because you expect them to be true (rather than merely convincing). The problem is fully corrected if you update on how well the evidence correlates with truth rather than on how convincing it feels.
In other words, if you expect the fifth column more if you see sabotage, and more if you don’t see sabotage, then you can reduce that into just expecting the fifth column more.
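A minimal numeric sketch of that reduction (numbers invented for illustration): if you would raise your estimate after either observation, the coherent move is to replace your current estimate with the weighted average of the two conditional estimates right now.

```python
p_sabotage = 0.4        # P(S): chance you will observe sabotage
p_f_given_s = 0.50      # P(F|S): fifth column given sabotage
p_f_given_not_s = 0.35  # P(F|~S): fifth column given no sabotage

# If your current P(F) is, say, 0.3, you expect to revise it upward
# no matter what you observe -- so 0.3 was never coherent to begin with.
coherent_p_f = p_sabotage * p_f_given_s + (1 - p_sabotage) * p_f_given_not_s
print(coherent_p_f)  # 0.41: just expect the fifth column this much already
```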
I’ve been teaching myself the basics of probability theory (I’m sixteen) but I’m having trouble on the first step. My basic definitions of probabilities are all frequentist, and I don’t know a good Bayesian source appropriate for a secondary school student. Is Jaynes’ PT:LOS able to be read by moi, given that I know basic set theory? If not, can anyone recommend a different textbook?
Jaynes’s book probably requires a university undergraduate-level familiarity with probability theory to fully appreciate.
I’d say that for the time being you don’t need to worry about bayesianism vs. frequentism. Just learn the basics of probability theory and learn how to solve problems.
Thanks for being the one commenter who told me how tough the book is—I’m leaving it for now, and the below recommendation of ‘Understanding Uncertainty’ was very useful for understanding what a probability is. After that, I’ve got some basic probability textbooks waiting to go. Cheers.
It’s worth knowing that what Jaynes calls “probability” everyone else calls “statistics.”
Generally, “probability theory” means studying well-specified random models. In some sense this is frequentist, but in another sense the distinction does not apply. Whereas “statistics” is about subjective ignorance.
And simulation theory is kinda the opposite of statistics—whereas in statistics you deduce the distribution from sample data, in simulation you compute plausible sample data from a given distribution.
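A toy sketch of the two directions in Python (the normal distribution and its parameters are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)

# Simulation: start from a given distribution, produce plausible sample data.
true_mu, true_sigma = 5.0, 2.0
samples = [random.gauss(true_mu, true_sigma) for _ in range(10_000)]

# Statistics: start from sample data, infer the distribution behind it.
print(statistics.mean(samples), statistics.stdev(samples))  # close to 5.0, 2.0
```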
If you’re looking for an elementary introduction to Bayesian probability theory, I recommend Dennis Lindley’s Understanding Uncertainty. A lot more accessible than Jaynes, but not dumbed down. It’s informal, but it covers a number of quite sophisticated topics.
Lindley is one of the architects of the Bayesian conspiracy.
Given that PT:LOS is free online you can just try reading it. Even if you don’t understand all the maths (do you know some calculus?) you’ll still be able to read his verbal explanations of things, which will give you a good idea of the distinction between frequentist statistics and Bayes.
IIRC the version that’s online is not the same as the dead-tree version you can buy; the latter has extra material and bugfixes. (I do, none the less, think reading the online version is a good way for Benito to determine whether he finds it approachable.)
With math, it’s useful to be able to distinguish books you can’t understand because you’re missing prerequisite knowledge from books you can’t understand because you just aren’t reading them carefully enough. The prevailing wisdom seems to be that you can’t really expect to be able to follow Jaynes through if you pick it up as your first serious textbook on probability.
Agh. Please do not abuse English (or French) this way; what did they ever do to you? What you want to say is “Can I understand Jaynes’s PT:LOS?” This places the action where it belongs, with a human. A book is not “able to be read” by anyone. I am able to type, because I can perform the action of moving my fingers on the keyboard. Being read is not an action; consequently there is no such thing as “able to be read”. And even if there were, a book would not have that ability, because books do not perform actions. Additionally, that is one of the ugliest passive-voice constructs I’ve ever seen; and I’ve read quite a bit of unpublished academic writing. (And if you think the average journal article is awful, you should see what they’re like before the internal reviewers exercise their judgement, such as it is.) Finally, ‘moi’ for ‘me’ might have been archly funny or ironically pretentious the first two or three times it was used, in the sixties. The eighteen sixties.
Thumbs up to Benito for having the interest in these topics at that age. Rolf, why the rant against him? We should be encouraging to young people interested in rationality and bayesian probability.
Rolf’s comment is a fine example of the aphorism ‘praise should be delivered in public, criticism in private’. When I spot someone making a grammar error or formatting error or other minor error, I try to PM them rather than make a public comment. For two reasons:
People really don’t care, and a minor correction shouldn’t permanently clutter up comment threads. People reading Benito’s request for help don’t care whether people dislike the French. Yes, Rolf is right that it’s a little annoying and offputting. But if people don’t want to read his gratuitous use of French, they especially don’t want to read 5 or 15 comments debating it. So criticizing him with a public comment wastes other people’s time.
Criticizing like that in public is especially likely to make someone slightly angry, lash back, or ignore it. So criticizing him with a public comment is less likely to accomplish the claimed goal of improving his writing.
I’ll note that, whilst I found Rolf’s comment mildly amusing, it did not have a significant effect on the probability of me speaking like that in the future.
Gwern mentioning more in passing that it was a little annoying and off-putting, without being aggressive or rude about it, has affected me—I wasn’t aware it was either. I probably won’t use it again.
An interesting factoid. Drawing implications is left as an exercise for the reader.
″...for two decades, all the Minuteman nuclear missiles in the US used the same eight-digit numeric passcode: 00000000. … And while Secretary of Defense Robert McNamara directly oversaw the installation of PALs on the US-based ICBM arsenal, US Strategic Command generals almost immediately had the PAL codes all reset to 00000000 to ensure that the missiles were ready for use regardless of whether the president was available to give authorization.” (source)
duplicate. I’m surprised I can only find this one.
The original source is Bruce Blair, 2004, who has made related complaints since 1977. Supposedly Eric Schlosser’s book (2013) is an independent source. Luke quotes it at length here, but not about the zeros. The most common source is Steven Bellovin, who makes some historical remarks here more candidly than most accounts.
Looking for people older than me (I’m 26) to tell me their memories of what kind of nutrition messages they remember getting from Nutrition Authority Type People (USDA or whatever).
The reason I ask is that I read a bunch of Gary Taubes over the weekend, and at first glance his claims about what mainstream nutritionists have been saying strike me as… not what I’ve experienced, to put it mildly. In particular, the nutritiony stuff I learned as a kid was always pretty clear on sugary soda and snacks being bad for you. Charitable hypothesis: maybe mainstream nutrition messaging was much crazier in the 80s? I don’t actually think this is likely but I thought I’d ask.
I may be a bit older than you’re looking for (44, grew up in small-town Indiana), but it just so happens I was back in the US for Thanksgiving and discussed nutrition education with other members of my family.
All of the nutrition education I remember was structured in terms of the four main food groups: meat, dairy, grain, fruit & vegetables—focusing on the idea that these should all be represented in a balanced meal. We also were taught about nutritional content, mainly which vitamins are represented in which food groups (and which specific foods), but almost entirely separately from “meal planning”. This was hardly changed from the nutrition education my parents received some 20 years previously… although that’s not surprising, as a few of the teachers were the same!
My younger siblings (38, 40) saw the introduction of the fifth food group (fats & sugars, as I recall), presented as bad things that should be avoided. Also, the presentation of the four food groups was somewhat altered, bringing nutritional balance (and the “recommended daily allowance”) a bit more to the forefront in meal design.
(All of the above is based on our memories of nutrition education which may be highly flawed!)
He claims that they recommended that people reduce their fat intake (which is definitely true), but then he tries to pin the increased consumption of sugary crap on them (which is much less credible). For example:
The perversity of this alternative hypothesis is that it identifies the cause of obesity as precisely those refined carbohydrates at the base of the famous Food Guide Pyramid—the pasta, rice and bread—that we are told should be the staple of our healthy low-fat diet, and then on the sugar or corn syrup in the soft drinks, fruit juices and sports drinks that we have taken to consuming in quantity if for no other reason than that they are fat free and so appear intrinsically healthy.
“Sugary crap” is just shorthand for “the sugary stuff everyone agrees is bad for you.” The badness of e.g. sugary soda is pretty uncontroversial among nutritionists, “low-carb” or otherwise.
It was my impression that dieticians recommend avoiding processed sugar because of the lack of nutrients, thus making it easy for a diet high in processed sugar to have too many calories and not enough nutrients. Also, that in people with a genetic predisposition to insulin resistance, diets high in sugar have been shown to be correlated with developing insulin resistance and diabetes.
I have never seen a professional dietician refer to ‘sugary stuff’ as ‘bad for you’.
That terminology has always confused me. What, sucrose is not a nutrient? Why not?
Not to mention that this is talking apples and oranges—calories are a term from the physics-level description and nutrients are a term from the biochemistry-level description.
The correct word is micronutrients. Perhaps some people mistakenly interchange the words.
Mass media uses “nutrients” in the sense of “a magical substance, akin to aether or phlogiston, that makes you thin and healthy”. It is mostly generated by certificates of organic farming and is converted into its evil twin named “calories” by a variety of substances, e.g. anything connected to GMOs.
You’re right that sucrose can indeed be considered a nutrient, but I’m just using the word to refer to essential nutrients, i.e. molecular groups that you need to consume in your diet for the proper functioning of human biochemistry and for which nothing else can be substituted. As Nornagest says, these are vitamins, minerals, essential amino acids and essential fatty acids. Sucrose is not any of these, so it is not an essential nutrient.
I don’t see why ‘comparing apples and oranges’ invalidates the argument, though. What difference does it make if they refer to different processes?
I also agree that nutrition is extremely contentious and politically charged.
Well, essential nutrients are a somewhat different thing, but even that doesn’t really help. The issue here is the unstated underlying assumption that everyone needs all the essential nutrients, and the more the better.
To give an example, iron is an essential nutrient. Without it you get anemia and eventually die. So, should I consume more of this essential nutrient? In my particular case, the answer happens to be no—I have a bit too much iron in my blood already.
Unsurprisingly, for many essential nutrients you can have too much as well as too little. And yet the conventional wisdom is that the more nutrients the better.
Human biochemistry is very complicated, and all the public discourse about diet can manage is “Less calories! More nutrients!” Ugh.
(yes, I know, I’m overstating things for dramatic effect :-P)
I agree with you that ‘more nutrients!’ is not sound advice, but again, I never said anything like that, not even implicitly.
Human biochemistry is indeed very complicated. That’s exactly why I responded to ChrisHallquist’s remark about ‘sugar being bad’, because I feel that that is vastly oversimplifying the issues at hand. For instance, simple sugars like fructose exist in fruit, and not necessarily in small amounts either. Yet I don’t think he would argue that you should avoid all fruit.
For instance, simple sugars like fructose exist in fruit, and not necessarily in small amounts either.
What do you mean by small amounts? In the context of Taubes claiming that people are drinking soda because they don’t realize it’s unhealthy, this is the amount you’re comparing it with. (For comparison, that’s the amount in fruits.)
I once tried to plan a very simple diet consisting of as few foodstuffs as possible. Calculating the essential nutrient contents, I quickly realized that’s not possible, and that it’s better to eat a little bit of everything to get what you need, unless of course you take supplements.
Anyone else notice that at least three of the Soylent guys seem to have an unusual flush on their cheeks? Is this just sheer vitality glowing from them, or could there be something else going on? :)
I’ve seen several pictures of Rob and his face seems to be constantly red.
Do you know if their Soylent recipe uses carrots or other pigmented vegetables? It could be an accumulation of the coloring. (This apparently happened to me as an infant with carrots. Made my face red/orangish.)
The early version contains carotenoids found in pigmented vegetables: at least lycopene, found in tomatoes, and alpha-carotene, found in carrots. It seems you’d get far fewer carotenoids from Soylent than from just eating one tomato and one carrot per day.
He mentions “not very scientific, but the males in my family have always loved tomatoes.” Perhaps that’s the explanation and not Soylent, although you get about a third as many carotenoids from tomatoes as from carrots, so you’d probably have to eat ridiculous amounts of them to become red. Perhaps they love carrots too.
It seems you’d get far fewer carotenoids from Soylent than from just eating one tomato and one carrot per day.
Early recipe, and practically speaking, I don’t know what the effects of one tomato & carrot a day would be! Rhinehart and the others have been on Soylent for, what, a year now? That’s a long time for stuff to slowly accumulate. Most people don’t eat a single vegetable that routinely. During the summer I eat 1 tomato a day (we grow ours) without glowing, but then I don’t eat any tomatoes during spring/winter, which is disanalogous.
Does anyone actually think that the optimal amount of calories is zero and the optimal amount of nutrients is infinity? I haven’t seen many people taking a dozen multivitamins a day but otherwise fasting, so...
(If what they actually mean is that more people in the First World are eating more calories than optimal than fewer, and vice versa for certain essential nutrients, I’d guess they’re probably right.)
Then again, it’s hard for most people to think quantitatively rather than qualitatively, but that doesn’t seem to be a problem specific to nutrition.
Does anyone actually think that the optimal amount of calories is zero and the optimal amount of nutrients is infinity?
It’s common for people to think that they (or others) should consume fewer calories and more nutrients. They generally stop thinking before the question of “how much more or less?” comes up.
It’s common for people to think that they (or others) should consume fewer calories and more nutrients.
And sometimes they are right.
They generally stop thinking before the question of “how much more or less?” comes up.
True that, but that doesn’t seem to be specific to nutrition.
(That said, I am peeved by advice that assumes which way the listener is doing wrong, e.g. “watch less TV and read more books” rather than “don’t watch too much TV and read enough books”.)
calories are a term from the physics-level description and nutrients are a term from the biochemistry-level description.
Um, no. Nutrients are things your body needs to function. Some, but not all, of them can be burned for calories. They can also be used for other things.
In this context, I’d take “nutrients” to refer loosely to the set of things other than food energy that we need to consider in diet: vitamins, dietary minerals (other than sodium, usually), certain amino acids and types of fat, and so forth. That doesn’t map all that closely to the biochemical definition of a nutrient, but I don’t expect too much from pop science, especially not in a field as contentious and politically charged as nutrition.
Oh, I don’t expect much from it at all, but unfortunately this terminology is pervasive and, IMHO, serves to confuse and confound thinking on this topic.
Wires-crossed moment. Yes, they were indeed; pity they were sooo wrong, and that the word “fat” conflates a dietary meaning with a physiological energy-storage meaning. In other words, people hear “make me fat” when you mention fat and how one (me specifically) eats so much of it.
Peter Attia and Gary Taubes have set up NUSI to get some much needed science behind optimal diet.
This sounds familiar to me. I’m 32 and I definitely remember hearing stuff like this. I remember in elementary school (so, late 80s/early 90s) seeing the Canada food guide recommend that a male adult eat something like up to 10 servings of grains a day, which could be bread or pasta or cereal. You were supposed to have some dairy products each day, maybe 2-4 servings. And maybe 1-3 servings from Meat & Alternates.
I remember that pretty much all fat was viewed (popularly) with caution, at least until Udo Erasmus came out with his book Good Fat, Bad Fat.
But I do recall a clear message that soda and snacks were unhealthy. It wasn’t as though soda was thought ok just because it was low fat / high carb.
Does he argue there was a change of opinion in the 80s or before that? If I recall correctly, he argues that the guidelines have remained roughly the same for decades, or have even changed for the worse.
I would like some feedback on a change I am considering in my use of some phrases.
I propose that journal articles be called “privately circulated manuscripts” and that “published articles” should be reserved for ones that can be downloaded from the internet without a subscription. A milder version would be to adopt the term “public article” and just stop using “published article.”
I think that if you do this and few others do, the main result will be to confuse your readers or hearers—and of those who are confused, when you’ve explained I fear that a good fraction of those who didn’t already agree with you will pigeonhole you as a crank.
Which is a pity, because it would be good for far more published work to be universally accessible than presently is.
A possibly-better approach along similar lines would be to find some term that accurately but unflatteringly describes journals that are only accessible for pay (e.g., “restricted-access”) and use that when describing things published on such terms. That way you aren’t redefining anything, you aren’t saying anything incorrect, you’re just drawing attention to a real thing you find regrettable. You might or might not want a corresponding flattering term for the other side (e.g. “publicly accessible” or something). “There are three things worth reading on this topic. There’s a book by Smith, a restricted-access journal article by Jones, and a publicly-accessible paper by Black.”
You don’t think “privately circulated manuscript” is 100% accurate?
I think it’s pretty clear to say “a privately circulated article by Jones and a published paper by Black,” at least as long as I provide links.
The ambiguity I’m concerned about is where my comment is very short; the typical situation is providing the public version to someone who cited the private version.
“Privately circulated” implies something that’s only available to a very small group and not widely available. This might be a fair characterization in the case of some very obscure journals, but we might reasonably expect that most of the universities in the world would have subscriptions to journals such as Nature. According to Wolfram Alpha, there are 160 million students in post-secondary education in the world, not including faculty or people at other places that might have an institutional subscription.
Even taking into account that not all of “post-secondary education” happens at universities, and that it probably also includes more vocational institutions that are unlikely to subscribe to scientific journals, we can probably expect the number of people who have access to reasonably non-niche journals to be in the millions. That doesn’t really fit my understanding of “privately circulated”.
Would you consider Harry Potter not to have been published because it is not being given away for free? Why should “published articles” be defined differently from “published books”?
Everyone applies “published” differently to books and articles. In fact, most people use “published article” to mean “peer-reviewed article,” but even ignoring that there are pretty big differences.
Why did you choose to make this comment here, rather than in response to my original comment?
You don’t think “privately circulated manuscript” is 100% accurate?
No, I read “privately circulated” as distributed to a limited and mostly closed circle. If anyone with a few bucks can buy the paper, I wouldn’t call it “privately circulated”.
As always, a phrase being technically 100 percent correct has a lot less to do with whether it’s understood as intended than you might think. A “privately circulated manuscript” implies the Protocols of the Elders of Zion to me.
Wouldn’t it be more practical to simply adopt a personal rule of jailbreaking (if necessary) any paper that you cite? I know this can be a lot of work, since I do just this, but it does get easier as you develop the search skills, and it is much more useful to other people than an idiosyncratic personal vocabulary.
I think there have been past threads on this. The short story is Google Scholar, Google, your local university library, LW’s research help page, /r/Scholar, and the Wikipedia Resource Request page.
I wonder if “pirating” papers has any real chance of adverse repercussions.
I have 678 PDFs on gwern.net alone, almost all pirated, and perhaps another 200 scattered among my various Dropboxes. These have been building up since 2009. Assuming linear growth, that’s something like 1,317 paper-years (((678+200)/2)*3) without any warning or legal trouble so far. By Laplace, that suggests a risk of trouble per paper-year of 0.076% (((1+0)/(1317+2)) * 100). So, pretty small.
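For anyone who wants to check the arithmetic, here it is spelled out in Python (the counts and the linear-growth assumption are from the comment above):

```python
pdfs_on_site = 678    # pirated papers on gwern.net
pdfs_elsewhere = 200  # scattered among various Dropboxes
years = 3             # building up since 2009

# Linear growth from zero means the average stock is half the current one.
paper_years = (pdfs_on_site + pdfs_elsewhere) / 2 * years  # 1317.0
incidents = 0         # no warnings or legal trouble so far

# Laplace's rule of succession: (successes + 1) / (trials + 2)
print((incidents + 1) / (paper_years + 2))  # ~0.00076, i.e. ~0.076% per paper-year
```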
There is no dichotomy. Word choice is largely independent of action. You set a good example, but you cite very few papers compared to your readers. Word choice to nudge your readers might have a larger effect. Do your readers even notice your example?
My question is how to get people to link to public versions, not how to get them to jailbreak. I think that when I offer them a public link it is a good opportunity to shame them. If I call it an “ungated” link, that makes it sound abnormal, a nice extra, but not the default. An issue not addressed by my proposal is how to tell people that google scholar exists. Maybe I should not provide direct links, but google scholar links. Not search links, but cluster links (“all 17 versions”), which might also be more stable than direct links.
I don’t know. I know they often praise my articles for being well-cited, but I don’t know if they would say the same thing were every citation a mere link to Pubmed.
My question is how to get people to link to public versions, not how to get them to jailbreak. I think that when I offer them a public link it is a good opportunity to shame them. If I call it an “ungated” link, that makes it sound abnormal, a nice extra, but not the default
If you just want to shame them, then there’s a much more comprehensible choice of terms. For example, ‘useful’ or ‘usable’. “Here is a usable copy”—implying their default was useless.
Universities have a lot of subscriptions so that their students can access journal articles for free, so “privately circulated” perhaps isn’t as accurate as you’d like to think. Journals can also be accessed from libraries.
That you are the type of person who thinks that all research should be freely available and charging for access to scientific journals is morally wrong. (You likely also prefer Linux over Windows because MS is evil, but put up with Apple because it is cool.)
Is there a better expression for the “my enemy must be the friend of my other enemy” fallacy, or insistence on categorizing all your (political or ideological) opponents as facets of the same category?
What Is the Enemy of My Enemy? Causes and Consequences of Imbalanced International Relations, 1816–2001
Abstract:
This study explores logical and empirical implications of friendship and enmity in world politics by linking indirect international relations (e.g., “the enemy of my enemy,” “the enemy of my friend”) to direct relations (“my friend,” “my enemy”). The realist paradigm suggests that states ally against common enemies and thus states sharing common enemies should not fight each other. Nor are states expected to ally with enemies of their allies or with allies of their enemies. Employing social network methodology to measure direct and indirect relations, we find that international interactions over the last 186 years exhibit significant relational imbalances: states that share the same enemies and allies are disproportionately likely to be both allies and enemies at the same time. Our explanation of the causes and consequences of relational imbalances for international conflict/cooperation combines ideas from the realist and the liberal paradigms. “Realist” factors such as the presence of strategic rivalry, opportunism and exploitative tendencies, capability parity, and contiguity increase the likelihood of relational imbalances. On the other hand, factors consistent with the liberal paradigm (e.g., joint democracy, economic interdependence, shared IGO membership) tend to reduce relational imbalances. Finally, we find that the likelihood of conflict increases with the presence of relational imbalances. We explore the theoretical and practical implications of these issues.
Recently found this paper, entitled “On the Cruelty of Really Teaching Computer Science” by Dijkstra (plaintext transcription here). It outlines ways in which computer programming has failed (and still fails) to actually jump across the transformative-insight gap that led to the creation of the programmable computer. Probably relevant to many of this crowd, and very reminiscent of some common thoughts I’ve seen here related to AI design.
In the same place I found this paper discussed, there was mention of this site, which was recommended as teaching computer science in a way implementing Dijkstra’s suggestions and this textbook, similarly. I can’t vouch for them personally yet, but this might be an appropriate addition to the big list of textbooks.
Dijkstra’s ideas may be relevant to safety-critical domains (at least to some extent) but the article is flagrantly ignoring cost-benefit tradeoffs. Empirically we see that (manual) proof-oriented programming remains a small niche while test-driven programming has been very successful.
He’s certainly not ignoring cost-benefit tradeoffs. He acknowledges this as a perceived weak point, and claims that, when practiced properly, the tradeoff is illusory. (I rate this unlikely but possible, around 2% that it’s purely true and another ~20% that the cost increase is greatly exaggerated.)
I’m pretty sure Dijkstra would argue (and I’m inclined to agree) that proof-oriented programming hasn’t gotten a fair field test, since the field is taught in the test-driven paradigm and his proof-oriented teaching methods were never widely tried. There’s definitely some status quo bias at work; the critical question is whether Dijkstra’s methods would pass the reversal test, and if so how broadly. My intuition suggests “Yes, narrowly with positive outlook”; as we move toward having more and more information on cloud-computing servers and services and social networks, provably-secure computing seems likely to be appealing in increasingly broad applications, particularly when you look at large businesses wanting to reap the benefits of new technologies but very leery of the negative consequences of bugs.
And of course, even in the status quo, these methods still have relevance to anyone looking to make high-risk things like AI.
I’m pretty sure Dijkstra would argue (and I’m inclined to agree) that proof-oriented programming hasn’t gotten a fair field test, since the field is taught in the test-driven paradigm and his proof-oriented teaching methods were never widely tried.
I would be skeptical of this claim, given how diverse the field of software engineering is, and many programmers are both self-taught and mathematically talented, so they would be prone to trying out neat things like proof-oriented programming even if mainstream schools only taught the test-driven paradigm. At the same time, many schools actually focus on teaching computer science instead of software engineering, taking a much more theoretical and mathematical approach than what most programmers will ever actually need. People coming from these backgrounds would also seem to be inclined to try out neat formal methods. (If they pursued an academic career, they could even do so without profitability concerns.)
Dijkstra’s general sentiment seems to be that applying existing engineering practices from the civil, mechanical, electrical, etc. engineering disciplines to computer science is woefully inadequate. With this, I agree. I also agree that there seems to be some weird set of beliefs in mathematical culture that the human brain is superior to a computer and that no computer could ever do mathematics like a human could (I’ve seen even prominent mathematicians use Gödel’s theorem as bogus ‘evidence’ of this).
But the problem is that there doesn’t seem to be a viable alternative to the status quo of software engineering, not at the moment. The only type of radical new thinking that I am aware of is the functional programming approach taken by e.g. Haskell. But there are a lot of issues there as well. So far, productivity has been far higher using the more traditional way of doing things.
I did some Googling after reading the article and found this book by Dijkstra and Scholten actually showing how a first-order language could be adapted to yield easy and teachable correctness proofs. That is actually amazing! I have a degree in CS, and unfortunately I’ve never seen a formal specification system that could actually be implemented and not be just some almost-tautological mathematical logic, like so many systems that exist in academia. Thanks very much for the link.
If you are interested in this kind of thing, you should check out Dafny. It’s a programming language with Hoare-logic style pre- and postconditions (and the underlying implementation computes weakest preconditions, Dijkstra-style). But what sets it apart is that it is backed by an automatic theorem prover (Z3) which is sufficiently powerful to handle most things that seem trivial to a human. To me Dafny feels like the promise of programming verification research in the 1970s finally came through: you can carry out program verification like you would with pen and paper, without being overwhelmed by finicky algebraic manipulations.
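For readers who haven’t seen the style: here is a rough runtime analogue in Python of what a Dafny contract expresses. This is not Dafny syntax, and Dafny proves the requires/ensures clauses statically with Z3 rather than checking them with asserts at runtime; the sketch just shows the shape of the contracts.

```python
def int_sqrt(n: int) -> int:
    assert n >= 0, "requires: n is non-negative"  # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1  # loop invariant maintained: r*r <= n
    # postcondition: r is the integer square root of n
    assert r * r <= n < (r + 1) * (r + 1), "ensures: r == floor(sqrt(n))"
    return r

print(int_sqrt(10))  # 3
```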
Mathematicians (and Dijkstra qualifies as one) have been bemoaning the lack of rigour in undergraduate education for some time now. (Aye, even as early as the French vs. English trigonometry textbook debates of the 1800s.) The United States has a peculiar cultural mismatch between the relative quality of secondary and undergraduate education, which in my mind causes most of the drama. In particular, EWD1036 was written during Dijkstra’s career at UT Austin.
I’d like to know if this phenomenon is global, though.
If the human race is down to 1000 people, what are the odds that it will continue and do well? I realize this is a nitpick—the argument would be the same if the human race were reduced to a million or ten million.
It’s an interesting question. The Toba catastrophe theory suggests that the human population dropped as low as 10,000 individuals during a period of climate change linked to a supervolcano eruption. Another theory suggests it dropped as low as 2,000 individuals. Overall I think 1,000 individuals is enough genetic diversity that humans could recover reasonably well.
The real problem seems to me to be whether humans could ever catch up to where we are after being knocked down so low. Some people have suggested that if civilization collapses humanity won’t be able to start a new industrial revolution due to depleted deposits of oil and surface minerals.
Oil (and coal, which is less topically sexy but historically more significant to industrialization) is the big problem, though rare earths and other materials that see use more in trace than in concentration could also be an issue. If you’re a medieval-level smith, you probably wouldn’t care too much whether you’re getting your Fe from bog iron nodules or from the melted skeletons of god-towers in the ruins of Ellae-that-Was, although certain types of bottleneck event could make the latter problematic for a time.
Still, I’d be willing to bet at even odds that that wouldn’t be a showstopper if it came to it.
The real problem seems to me to be whether humans could ever catch up to where we are after being knocked down so low. Some people have suggested that if civilization collapses humanity won’t be able to start a new industrial revolution due to depleted deposits of oil and surface minerals.
On the other hand, these future humans would probably be able to learn things like science much more quickly because of all the information we have lying around everywhere.
Our information storage media has a surprisingly short shelf life. Optical disks of most types degrade within decades; magnetic media is more variable but even more fragile on average (see here and the linked pages). There are such things as archival disks, and a few really hardcore projects like HD-Rosetta, but they’re rare. And then there’s encryption and protocol confusion to take into account.
A couple centuries after a civilization-ending event, I’d estimate that most of the accessible information left would be on paper, and not a lot of that.
Audio cuts out at around 38 minutes, after that there is no sound from Eliezer’s mic, so it’s apparently relying on the camera mic which makes the recording noisy and hard to hear.
LW meta (reposted, because a current open thread did not exist then): I have received a message from “admin”:
We were unable to determine if there is a Less Wrong wiki account registered to your account. If you do not have an account and would like one, please go to your preferences page.
I have seen, indeed, options to create a wiki account. But I already have one; how do I associate the existing accounts?
A related question: I clicked the (modified) URL that “admin” sent me, and the page contained a form where I could fill in my LW password in order to create a wiki account. I submitted it but I cannot login on the wiki with my LW credentials. What’s going on?
Today I skim-read Special Branch (1972), the first book-length examination of Good’s “ultra-intelligent machine.”
It is presented in the form of a 94-page dialogue, and the author (Stefan Themerson) is clearly not a computer scientist nor an analytic philosopher. So the book is largely a waste of attempted “analysis.” But because I’m interested in how ideas develop over time and across minds, I’ll share some pieces of the dialogue here.
A detective superintendent from “special branch,” named Watson, meets up with the author (the dialogue is written in first person), and explains that a team is building Good’s ultraintelligent machine. They both refer to the machine with female pronouns, and apparently “she” will be an odd machine indeed (p. 25):
“Dr. Good calculated it would cost 10^17 pounds to build her. Can you imagine!”
“Oh,” [Watson] said, unimpressed. “Dr. Good didn’t know one thing. He thought of actually manufacturing a kind of simulation of the neural network, millions and millions of single cells put together. But he overlooked the most obvious place where we can get thousands and thousands of whole neural assemblies all ready for us, and almost for next to nothing… in a slaughterhouse.”
Soon, the author gives some pieces of advice to those making the ultraintelligent machine:
No ought-arguments should be built into the machine (p. 26). “As she is a logical machine, it’s obvious that you can’t feed any ought-arguments into her. Because there is no logical argument to tell her why one ought not to kill or cheat or oppress or tyrannize.”
Don’t put any beliefs into the machine (p. 29).
Don’t let the machine read Plato first (p. 59).
After much further discussion, the book ends with a scene after the ultraintelligent machine has been built (p. 93):
There she was, suspended in the centre of the room, cool, silvery, Ultra-Intelligent.
“Are you ready?” I asked...
“I am ready,” she answered...
“Listen,” I said, “the question I am asking you is as follows: ‘What is the question I should ask you?’”...
“The question you asked was the only question I do not have an answer to” she said, and added “End of message.”
But I still heard murmurs creeping within her sphere. I stood up and put my ear to her silvery surface.
“Naughty boy. What a question. Miss my period. Silly boy. Put such question into me. Circular question. I need abortionist. Silly boy.”
Suddenly, the murmur exploded into a scream “How dare you? You eavesdropper!” she shouted, and at the same moment millions of eyes appeared on her silvery surface, human eyes, and fish-eyes, and fly-eyes; tele-eyes, and micro-eyes, and radar eyes; eyes to see and ears to hear, and noses to smell, and taste-buds to taste, and all sorts of legs and all sorts of hands and wings and fins and tails and jaws that bit and claws that catch—
I jumped away from her, threw myself down on my couch, inserted my hand between its edge and the wall where the switch was, and turned it off. She died. I was still alive. I am still alive. But I know that one day someone will come and switch her on again.
Here are two (correct) arguments that are highly analogous.
Brownian motion (the fact that a particle in water or air does not come to rest, but keeps dancing at a minimal rate) is an important piece of evidence for the atomic hypothesis. Indeed, Leucippus and Democritus are said to have derived the atomic hypothesis from Brownian motion; certainly Lucretius presented it as evidence.
Similarly, Darwin worried that “blending” inheritance would destroy variation in quantitative traits. He failed to reach the conclusion that heredity should be discrete, though.
I’m planning to run a rationality-friendly table-top roleplaying game over IRC and am soliciting players.
The system is Unknown Armies, a game of postmodern magic set in a creepier, weirder version of our own world. Expect to investigate crimes, decipher the methods behind occult rituals, interpret symbols, and slowly go mad. This particular game will follow the misadventures of a group of fast food employees working for an occult cabal (well, more like a mailing list) that wants to make the world a better place.
Sessions will be 3-4 hours once a week over IRC or Google Hangouts or Skype or whatever people are most comfortable with. There are slots for two to three players; email me at sburnstein@gmail.com if you’re interested or if I can answer any questions about the game.
Is there a name for the halo effect of words? There should be, because one example of it is “Overdraft Protection”.
EDIT: I am specifically referring to Debit Card Overdraft protection service
EDIT 2: I have been made aware that I am using the wrong term; overdraft service is the term most commonly used by major banks to refer to the “service” they offer on debit card overdrafts. If you see me refer to something as Overdraft Protection, please assume I am referring to the service banks give you on debit card use.
If you are from the States, I am willing to bet that you have opened a bank account at some point in your life and were presented with the option to have Overdraft Protection. Say no. For most people, saying no is the right answer. I think many people, when asked about this on the spot, don’t have enough time to think through what Overdraft Protection really is. Just because someone decided to name something “Protection” doesn’t mean it protects you from anything. It might even feel silly to opt out of something that is offered for “free”, which is why I think a lot of people fall into this trap. Let me explain why you should opt out.
If you pay for something that you do not have the funds to pay for, the bank will lend you the money or help you transfer the money from a linked account to cover your purchase. They charge anywhere from $12 to $34 or more for this service. Chase is a major bank, and they charge $34. If, for example, you forgot to deposit your paycheck and bought a $3 latte with only $1 in your checking account, Chase will “protect” you from having the purchase declined, for a fee of $34.
If you knew that you didn’t have enough money, would you agree to pay $34 for Chase to loan it to you? The answer is no. You would rather have your purchase declined. There is no fee for having a purchase declined. In fact, the real protection is having the purchase declined and not borrowing money at an insane effective interest rate.
These fees stack per transaction. Most people are hit with fees because they were not aware they lacked the funds, which means multiple transactions are often made the same day in the belief that everything is OK. So if you buy a latte for breakfast, lunch, and dinner, Chase will charge you $102… because, you know, they are protecting you from the embarrassment of being declined. Lucky you.
Too many people have Overdraft Protection when they don’t need it, and the problem isn’t that most people are too stupid to do simple math; it’s that they never really thought about the implications. They were rushed into agreeing to something without thinking about it. Well, now you have thought about it, so you don’t have an excuse. If you don’t need overdraft protection, go and opt out now. Please avoid the trap of thinking that you never overdraft so it doesn’t matter; that is a bad decision. Even if it were true that you rarely overdraft, why would you deliberately keep a potential landmine of fees under your feet?
When is overdraft protection appropriate? Very rarely. It can come in handy when writing important checks for a mortgage or loan; other than that, most people do not use checks to pay bills any more. I used to be a poor university student, and students are the prime targets of these bank scams, so get smart and get rid of it today.
I bank with Chase, and unless the written information I’ve received from them is a straight-up lie (which would put them at risk for a lawsuit...), this information is factually inaccurate. What you describe as “overdraft protection” is actually the policies you’ll be subjected to without overdraft protection. Overdraft protection does come with fees, but they’re much lower, no more than $10 a day.
(The moral of the story: don’t be overdrawn. It will cost you money in fees with or without overdraft protection.)
The confusion stems from the fact that Chase has two different services, one for check writing and one for debit cards. I am specifically talking about debit card usage and will edit my post to make that clearer.
Chase will charge you $10 per day for check-writing overdraft protection on accounts that are linked; this is true. However, for debit card use you would be charged $0 if you opt out, and indeed pay $34 per transaction if you opt in. The problem is that many banks combine checking and debit card usage into one plan, while others like Chase split it up. My main point is that check writing is becoming very rare, and most people get dinged with fees using their debit cards. So if the two are combined and you really don’t write checks, then you definitely should opt out.
There is a $34 fee for debit card overdraft protection and a $0 fee for opting out (here and here). Does this resolve your disagreement?
(The moral of the story: don’t be overdrawn. It will cost you money in fees with or without overdraft protection.)
If you opt out of debit card overdraft protection, it will not cost you any money! If you opt in for debit card overdraft protection, it will cost you money. I know it sounds ridiculous, because it is.
Based on the links, Chase doesn’t even call their service for debit cards “overdraft protection,” so this doesn’t support the original point about words misleading people. Also, it seems that if you have debit card coverage and overdraft protection, you’ll at most be charged $10/day for overdrawing with your debit card. Still better to use a credit card when you don’t have money in your checking account, obviously.
(Also, as Louie Helm recently pointed out, as long as you pay your balance in full every month, you’re better off using your credit card for everything because the rewards program will reduce the cost of everything you buy by 1% or more.)
Chase doesn’t even call their service for debit cards “overdraft protection,” so this doesn’t support the original point about words misleading people
In the spirit of being helpful and trying to be as factually accurate as possible, I have edited my original post, as you are absolutely correct about the terminology. I would only argue that my original point was merely a segue to introduce my main argument: that debit card overdraft services are typically poor decisions.
Also, it seems that if you have debit card coverage and overdraft protection, you’ll at most be charged $10/day for overdrawing with your debit card.
I do not believe this is accurate.
However, even assuming it is accurate: if you weigh the cost/benefit (again, talking about debit card use), it is IMO still a terrible deal. My bank happens to be Wells Fargo, and they charge $12 for debit overdraft service; better, but still pretty bad. Ultimately you must decide what is an acceptable fee. The vast majority of people getting dinged for debit card overdrafts are not buying life-saving medication; it’s more likely to be a cup of coffee or a hot dog. So if you asked them what they would have done had they known they had insufficient funds, they would likely reject the $10 or $34 fee. This isn’t even considering that most banks are not obligated to tell you that you are overdrawn, so you could get dinged $10 a day until you finally realize it, as opposed to being notified right away by being declined. By the way, since you’re a Chase customer: Chase happens to waive the fee if you can fund your account by day’s end, but they aren’t obligated to inform you that you are negative.
you’re better off using your credit card
You’re better off using your credit card and saying no to debit card overdraft service, for the most part. The exception is if you frequently find yourself in a position where your purchases must go through, for whatever reason.
Oh wheee, this is what I worked on in DC. There are a few different things that can happen when you try to make a purchase on a debit card with insufficient funds:
the merchant sees you don’t have the money, the card is declined, and you pay the bank nothing
the bank transfers money from a linked account (usually a savings account or line of credit) and charges a fee for this service (median $10, at least back in 2012)
the bank covers the cost of the purchase, which you now need to pay back, along with a fee of (at median) $35
The law changed recently (in the last 5 years) so that banks have to ask you to opt in to overdraft coverage. If you take no action, then when you try to buy something with your debit card that you don’t have the money to cover, you just can’t do it, and you incur no fee. So banks have done a big push to get people to opt in, including using the “Overdraft Protection” language, but, for most people, it’s a bad choice.
And, fun fact, some banks reorder your purchases, when they’re processed, in order to maximize the number of overdrafts you incur. (I.e., if you had $20 in your account and bought, in order, items costing $5, $5, $5, $20, some banks reorder your purchases high-to-low so they can charge three overdraft fees instead of one.) You can see a graphic with data from a real-world case here.
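For concreteness, here is a minimal Python sketch of that reordering effect; the purchase amounts are the ones from the example above, and the $34 fee is the hypothetical figure from earlier in the thread, not any particular bank’s actual pricing:

    def overdraft_fees(balance, purchases, fee=34):
        # Count one fee for each purchase the balance cannot cover;
        # the bank covers it and the balance goes (further) negative.
        fees = 0
        for amount in purchases:
            if amount > balance:
                fees += 1
            balance -= amount
        return fees * fee

    purchases = [5, 5, 5, 20]
    print(overdraft_fees(20, purchases))                        # 34: one fee, in the original order
    print(overdraft_fees(20, sorted(purchases, reverse=True)))  # 102: three fees, reordered high-to-low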
Fun fact: if you overdraw and are protected by a bank transfer from a linked account, but that linked account is also insufficient, you get charged both fees: one fee for the transfer, and another for not having enough after the transfer! How can they justify this? Easy: the fee is for the transfer alone; it does not guarantee that the transfer will be adequate.
Last month I signed up for a bank account at my local credit union, and they do offer overdraft protection of various sorts. One of the things that impressed me was that the woman who was setting up my account explained to me why I did not want overdraft protection, using a very similar example.
I cannot speak for all banks’ policies, but that isn’t how the ‘overdraft protection’ on my account works. How mine (actually a credit union, maybe that’s the difference) works is:
Without it, if I was to write a check with insufficient funds, I would get charged some large fee.
But with the Overdraft Protection, it will transfer money from my savings account to checking to cover it, for free, helping me avoid the fee. Essentially it lets me use the savings account as a safety net to avoid the charges.
This ‘protection’ has in fact saved me in a couple of instances.
UK banks lost a test case a few years ago that led to a lot of people getting back however many years of overdraft charges, plus interest. The same thing happened a bit later with “payment protection insurance”, intended to cover loan repayments if you lost your job, but with so many exclusions as to be almost worthless.
The end result was something like a forced savings policy. Cue people who avoided the initial trap wondering where their free money is.
Now that these advanced systems exist, they’ve been observed to compete with each other for scarce resources, and — especially at high frequencies — they appear to have become somewhat apathetic to human economies. They’ve decoupled themselves from the human economy because events that happen on slower human time scales — what might be called market “fundamentals” — have little to no relevance to their own success.
I’m curious about this, and specifically what’s meant by this “decoupling”. Anyone have a link to research about that?
It sounds somewhat like “financial AIs are paperclipping the economy” or possibly “financial AIs are wireheading themselves”, or both. If either is true, that means my previous worries about unfriendly profit-optimizers were crediting the financial AIs with too much concern for their owners’ interests.
Louie on G+ links an interesting pair of philosophy papers: http://plus.google.com/104557909419304580033/posts/jNdsspkqGH8 - An attempt to examine the argument from disagreement (’no two people seem able to agree on anything in ethics’) by using computer simulations of belief convergence. Might be interesting reading.
There are a couple of home EEG sets commercially available now; has anyone tried them? Are they useful tools for self-monitoring mental states?
I was diagnosed with avoidant personality disorder and obsessive-compulsive personality disorder, as well as major depression, about 4 months ago, and even though my depression has been drastically reduced by medication, I still often have suicidal thoughts. Does anyone have advice on dealing with this? It’s just hard to cope with feeling like I’m someone that it isn’t good or healthy to be around.
Naive question (if you don’t mind): what sort of things trigger your self-deprecating feelings, or are they spontaneous? E.g., can you avoid them or change circumstances a bit to mitigate them?
The prospect of social interaction, whether it actually happens or not, can trigger it. Any time I start a project (including assignments at university), go back to edit something, and it doesn’t meet my standards, I get quite severe self-deprecating feelings.
For the second one I managed to mitigate it by changing my working process to something more iterative and focused on meeting the minimum requirements before optimizing. I still have not found a remotely serviceable solution for the social interaction problems, and the feedback loops there are more destructive too. At least with the perfectionism problem I can move to another project to help restore some of my self-esteem.
It’s easy to be sympathetic with these two scenarios—I get frustrated with myself, often enough. Would it be helpful to discuss an example of what your thoughts are before a social interaction or in one of the feedback loops? I’m not really sure how I’d be able to help, though… Maybe your thoughts are thoughts like anyone would have: “shoot! I shouldn’t have said it that way, now they’ll think...” but with more extreme emotions. If so, my (naive) suggestion would be something like meditation toward the goal of being able to observe that you are having a certain thought/reaction but not identify with it.
Evolution in humans does not work to produce an integrated intellectual system; it produces whatever set of hacks was better suited to the ancestral environment than the alternatives. Thus we should expect the average human brain to have quite insular but malleable capabilities. Indeed, I have the impression that old arts like music repurpose those specific pathways in novel ways. Are there parts of our brains we can easily repurpose to aid in our quest for rationality?
I have a notion we aren’t just adapted to the ancestral environment—we’ve also got adaptations for low-tech agriculture (diligence, respect for authority) and cities (tolerance for noise, crowding, and strangers). Neither list is intended to be complete.
I’ve wondered whether people in separatist/supremacist movements have fewer city genes than average.
You mean like imagining you’re going to present an issue to an authority figure when thinking about it? Or something more wacky like converting reasoning problems into visual problems?
I am trying to find a post here and am unable to find it because I do not seem to have the right keywords.
It was about how the rational debate tradition, reason, universities, etc. arose in some sort of limited context, and how the vast majority of people are not trained in that tradition and tend to have emotional and irrational ways of arguing/discussing and that it seems to be the human norm. It was not specifically in a post about females, although some of the comments probably addressed gender distributions.
I read this post definitely at least six months and probably over a year ago. Can anyone help me?
I think it’s less than 25% probable that any of these is what you’re after, but (1) looking at them might sharpen your recollection of what you are after, (2) one or more might be a usable substitute for whatever your purpose is, and (3) others reading your comment and wanting to help now needn’t check those :-).
Someone led me to Emotional Baggage Check. The idea appears to be that people can leave an explanation of what’s troubling them, or respond to other people’s issues with music or words of encouragement. It sounds like a good idea (the current popular strategy of whining on a public forum seems to be more trouble than it’s worth). It doesn’t look particularly troll-proof, though.
If nothing else, I’d like to look at them in a year or so and see how it’s turned out.
Hey dude! I am the creator of that site. Hm yeah we are working on the whole troll-proof thing. Actually have a whole alert system set up but we still have more to work on. And yeah I’m pretty interested to see where we will be in a year too. Stay tuned.
Can someone change the front page so it doesn’t say “Lesswrong:Homepage”? This sounds like it is a website from 1995. Almost any other plausible wording would be better.
I feel as if I am surrounded by complete idiots. To be honest, I always had this feeling, but recently it has become very intense.
Seems to me that irrational people have the advantage that they can relatively easily join their powers. An irrational person mostly cares about one thing; a rational person cares about many things. Suppose that you have a person A believing that people are manipulated by space aliens; a person B believing that vaccination causes autism; and a person C believing that it’s all about homosexuals trying to destroy the traditional family. Technically, none of them contradicts the others. And if you succeed in creating a complex theory containing all the necessary components (the space aliens are controlling humankind by giving more political power to homosexuals, who use their power to destroy the traditional family by using vaccination to cause more autism), you already have three strong believers. And more people mean more political power! Meanwhile the rational person will disagree with A and B and C, and remain without any allies. The ability of an irrational person to accept a compatible irrational belief is popularly called “having an open mind”.
Sounds like an interesting real-world example of http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/
On the plus side, now you have all the material you need to write a satirical novel.
I used to wish that people would be more interested in how society works, go outside their homes, and try to improve things. After seeing this, I just wish they would all lose interest, return home, and start watching some sitcoms.
I wasn’t sure whether this largely political comment was okay to write on LW, but then I realized LW is pretty much the only place I know where I could write such a comment without receiving verbal abuse, racist comments, explanations that homosexuality really is the greatest danger of our civilization, or offended complaints about how insensitive I am towards religion. Recently, LW feels like an island of sanity in a vast ocean of madness.
Perhaps this will give me more energy to promote rationality in my country. I have already arranged another LW meetup after a few months’ pause.
Martin Odersky, the inventor of the Scala programming language, writes regarding a recent rant against Scala publicized on Hacker News:
The quotation is remarkable for its absolute lack of awareness of selection bias. Odersky doesn’t appear to even consider the possibility that he might be noticing the anti-Scala rants more readily than rants against other programming languages. Not having considered the possibility of the bias, he has no chance to try and correct for it. The wildly distorted impression he’s formed leads him to language bordering on conspiracy theories (“grand coalition of people who want to attack Scala”).
As someone who regularly reads Hacker News and other forums where such attacks are discussed, I have noticed a few widely discussed blog posts against Scala in the last few years, but there hasn’t been a flood of them, nor do they seem unusually frequent compared to other languages. All the languages Odersky named are regularly dissed. This anti-Ruby-on-Rails rant alone has been more widely publicized than all of the anti-Scala stuff put together.
Odersky is incredibly smart and accomplished. My point is the pervasiveness of selection bias and the importance of being consciously aware of it. The quoted passages amazed me because I assumed someone in his position would know this.
I think if you read what he wrote more generously (e.g., as if you were reading a mailing list post rather than something intended as a bulletproof philosophical argument), you’ll see that his implicit point, that he’s just talking about the reaction to Scala in particular, is clear enough. And, more importantly, the eventual discussion was productive in terms of bringing up ideas for making Scala more suitable for its intended audience. Given that his post inspired just the sort of discussion he was after, I do think you’re being a bit harsh on him.
I don’t know that we disagree. I will cheerfully agree that Martin’s email was relatively measured, the discussion it kicked off was productive, and that his tone was neither bitter nor toxic. That doesn’t detract from my point—that as far as I can make out, his perception of relative attack frequency is heavily selection-biased, and he’s unaware of this danger. It is true that in this case the bias did not lead to toxic consequences, but I never said it did. The bias itself here is remarkable.
If my being a bit harsh on him basically consists of my not saying the above in the original comment, I’ll accept that; I could’ve noted in passing that the discussion that resulted was at the end largely a friendly and productive one.
Yesterday I received the following message from user “admin” in my Less Wrong inbox:
But the link goes to a 404.
I got this, too. I was concerned that it might not be what it claimed to be, and avoided clicking the link. I view with suspicion anything unexpected that points me anywhere I might reasonably input login details.
I got it too. I think it was a typo in the URL, which should instead point to your preferences page.
Does that link actually work for you? If I enter my password, it briefly says “submitting” and the button moves to a different spot, but it doesn’t seem to create a wiki account.
This seems like the answer. Can one of the admins validate that this was the intended link?
That private message was part of a new feature to encourage wiki participation by helping existing Less Wrong users set up wiki accounts. Unfortunately the link to create an account didn’t point to the right place.
If you tried to create a wiki account and had the brief flash of “submitting” (like Pfft), make sure you’ve got a validated email address associated with your account.
Got the message too, and that 404, then created an account with the form below my username.
Also did the same.
I got a similar / identical message. Anyone know what was up with that?
Ditto.
I got one as well. Link http://wiki.lesswrong.com/prefs/wikiaccount
Petition to the FDA not to ban home genomic kits like 23andMe. I recommend that people here who are interested in personalized medicine or transhumanism, or who have any libertarian bent, consider reading and signing.
I added my own comment
The last line I added partly to aim at current left-wing attitudes about personal bodily integrity. I stole the less well known disease from Yvain’s excellent letter here, where I got to find out about yet one more fun disease potentially in my gene pool. I strongly recommend people read Yvain’s letter.
One piece of common wisdom on LW is that if you expect that receiving a piece of information will make you update your beliefs in a certain direction, you might as well update already instead of waiting. I happened to think of one exception: if you expect that something will cause a change in your beliefs when it shouldn’t, because it uses strong rhetorical techniques (e.g. highlighting highly unrepresentative examples) whose effect you can’t fully eliminate even when you know that they’re there.
(I have a feeling that this might have been discussed before, but I don’t remember where in that case.)
It’s more like, if you expect (in the statistical sense) that you will rationally update your beliefs in some direction upon receiving some piece of evidence, then your current probability assignments are incoherent, and you should update on pain of irrationality. It’s not just that you might as well update now instead of waiting. But this only applies if your expected future update is one that you rationally endorse. If you know that your future update will be irrational, that it is not going to be the appropriate response to the evidence presented, then your failure to update right now is not necessarily irrational. The proof of incoherence does not go through in this case.
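A minimal numeric check of this coherence claim (the probabilities below are arbitrary illustrative values, not from any source): by the law of total probability, the expected posterior equals the prior, so a coherent agent cannot expect, on average, to update in any particular direction.

    p_h = 0.3           # prior P(H)
    p_e_given_h = 0.8   # likelihood P(E | H)
    p_e_given_not_h = 0.4

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    posterior_if_e = p_e_given_h * p_h / p_e
    posterior_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

    # Expected posterior, averaged over whether E is observed:
    expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
    assert abs(expected_posterior - p_h) < 1e-9  # equals the prior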
This seems like a breakdown in reflective consistency. Shouldn’t you try to actively counter/avoid the expected irrationality pressure, instead of (irrationally and meekly) waiting for it to nudge your mind in a wrong direction? Is there a specific example that prompted your comment? I can think of some cases offhand. Say, you work at a failing company and you are required to attend an all-hands pep talk by the CEO, who wants to keep the employee morale up. There are multiple ways to avoid being swayed by rhetoric: not listening, writing down possible arguments and counter arguments in advance, listing the likely biases and fallacies the speaker will play on and making a point of identifying and writing them down in real time, etc.
No specific examples originally, but Yvain had a nice discussion about persuasive crackpot theories in his old blog (now friends-locked, but I think that sharing the below excerpt is okay), which seems like a good example:
As for trying to actively counter the effect of the misleading rhetoric, one can certainly try, but they should also keep in mind that we’re generally quite bad at this. E.g. while not exactly the same thing, this bit from Misinformation and its Correction seems relevant:
Sure, you should try to counter. But sometimes the costs of doing that are higher than the losses that will result from an incorrect belief.
This seems related, though not exactly what you are asking for.
There’s an intermediate step of believing things because you expect them to be true (rather than merely convincing). It’s fully corrected if you use correlates-to-truth over convincingness for the update.
In other words, if you expect the fifth column more if you see sabotage, and more if you don’t see sabotage, then you can reduce that into just expecting the fifth column more.
The phrenology guy isn’t showing up on the homepage for me. Did LW take him off?
That’s because the stylesheet link in the homepage is:
and that link should be to:
http://wiki.lesswrong.com/wiki/Lesswrong:Stylesheet
I’ve been teaching myself the basics of probability theory (I’m sixteen) but I’m having trouble on the first step. My basic definitions of probabilities are all frequentist, and I don’t know a good Bayesian source appropriate for a secondary school student. Is Jaynes’ PT:LOS able to be read by moi, given that I know basic set theory? If not, can anyone recommend a different textbook?
Jaynes’s book probably requires a university undergraduate-level familiarity with probability theory to fully appreciate.
I’d say that for the time being you don’t need to worry about bayesianism vs. frequentism. Just learn the basics of probability theory and learn how to solve problems.
Thanks for being the one commenter who told me how tough the book is—I’m leaving it for now, and the below recommendation of ‘Understanding Uncertainty’ was very useful for understanding what a probability is. After that, I’ve got some basic probability textbooks waiting to go. Cheers.
It’s worth knowing that what Jaynes calls “probability” everyone else calls “statistics.”
Generally, “probability theory” means studying well-specified random models. In some sense this is frequentist, but in another sense the distinction does not apply. Whereas “statistics” is about subjective ignorance.
That terminology sounds strange to me.
I define statistics as a toolbox of methods to deal with uncertainty.
And simulation is kinda the opposite of statistics: whereas in statistics you infer the distribution from sample data, in simulation you compute plausible sample data from a given distribution.
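A toy sketch of that direction-of-inference contrast (entirely illustrative; the Gaussian and its parameters are arbitrary choices):

    import random

    # Simulation: given a distribution, produce plausible sample data.
    known_mean, known_sd = 10.0, 2.0
    samples = [random.gauss(known_mean, known_sd) for _ in range(10000)]

    # Statistics: given sample data, infer the distribution's parameters.
    est_mean = sum(samples) / len(samples)
    est_var = sum((x - est_mean) ** 2 for x in samples) / (len(samples) - 1)
    print(est_mean, est_var ** 0.5)  # should land close to 10.0 and 2.0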
If you’re looking for an elementary introduction to Bayesian probability theory, I recommend Dennis Lindley’s Understanding Uncertainty. A lot more accessible than Jaynes, but not dumbed down. It’s informal, but it covers a number of quite sophisticated topics.
Lindley is one of the architects of the Bayesian conspiracy.
This recommendation has helped me out a lot, I might do a write-up of the book as a LW post at some point in the future. Thanks.
Given that PT:LOS is free online you can just try reading it. Even if you don’t understand all the maths (do you know some calculus?) you’ll still be able to read his verbal explanations of things, which will give you a good idea of the distinction between frequentist statistics and Bayes.
IIRC the version that’s online is not the same as the dead-tree version you can buy; the latter has extra material and bugfixes. (I do, none the less, think reading the online version is a good way for Benito to determine whether he finds it approachable.)
Indeed. (Although the dead-tree version doesn’t have that much extra material. It mostly just has the “Much more here!!!” notices deleted.)
A good way to find out would be to try reading it.
With math, it’s useful to be able to distinguish books you can’t understand because you’re missing prerequisite knowledge from books you can’t understand because you just aren’t reading them carefully enough. The prevailing wisdom seems to be that you can’t really expect to be able to follow Jaynes through if you pick it up as your first serious textbook on probability.
Agh. Please do not abuse English (or French) this way; what did they ever do to you? What you want to say is “Can I understand Jaynes’s PT:LOS?” This places the action where it belongs, with a human. A book is not “able to be read” by anyone. I am able to type, because I can perform the action of moving my fingers on the keyboard. Being read is not an action; consequently there is no such thing as “able to be read”. And even if there were, a book would not have that ability, because books do not perform actions. Additionally, that is one of the ugliest passive-voice constructs I’ve ever seen; and I’ve read quite a bit of unpublished academic writing. (And if you think the average journal article is awful, you should see what they’re like before the internal reviewers exercise their judgement, such as it is.) Finally, ‘moi’ for ‘me’ might have been archly funny or ironically pretentious the first two or three times it was used, in the sixties. The eighteen sixties.
Relevant SMBC.
Thumbs up to Benito for having an interest in these topics at that age. Rolf, why the rant against him? We should be encouraging to young people interested in rationality and Bayesian probability.
Rolf’s comment is a fine example of the aphorism ‘praise should be delivered in public, criticism in private’. When I spot someone making a grammar error or formatting error or other minor error, I try to PM them rather than make a public comment. For two reasons:
People really don’t care, and a minor correction shouldn’t permanently clutter up comment threads. People reading Benito’s request for help don’t care whether people dislike the French. Yes, Rolf is right that it’s a little annoying and off-putting. But if people don’t want to read his gratuitous use of French, they especially don’t want to read 5 or 15 comments debating it. So criticizing him with a public comment is wasteful of other people’s time.
Criticizing like that in public is especially likely to make someone slightly angry or to lash back or ignore it. So criticizing him with a public comment is less likely to accomplish the claimed goal of improving his writing.
I’ll note that, whilst I found Rolf’s comment mildly amusing, it did not have a significant effect on the probability of my speaking like that in the future.
Gwern mentioning in passing that it was a little annoying and off-putting, without being aggressive or rude about it, has affected me; I wasn’t aware it was either. I probably won’t use it again.
An interesting factoid. Drawing implications is left as an exercise for the reader.
″...for two decades, all the Minuteman nuclear missiles in the US used the same eight-digit numeric passcode: 00000000. … And while Secretary of Defense Robert McNamara directly oversaw the installation of PALs on the US-based ICBM arsenal, US Strategic Command generals almost immediately had the PAL codes all reset to 00000000 to ensure that the missiles were ready for use regardless of whether the president was available to give authorization.” (source)
duplicate. I’m surprised I can only find this one.
The original source is Bruce Blair, 2004, who has made related complaints since 1977. Supposedly Eric Schlosser’s book (2013) is an independent source. Luke quotes it at length here, but not about the zeros. The most common source is Steven Bellovin, who makes some historical remarks here more candidly than most accounts.
Looking for people older than me (I’m 26) to tell me their memories of what kind of nutrition messages they remember getting from Nutrition Authority Type People (USDA or whatever).
The reason I ask is that I read a bunch of Gary Taubes over the weekend, and at first glance his claims about what mainstream nutritionists have been saying strike me as… not what I’ve experienced, to put it mildly. In particular, the nutritiony stuff I learned as a kid was always pretty clear on sugary soda and snacks being bad for you. Charitable hypothesis: maybe mainstream nutrition messaging was much crazier in the 80s? I don’t actually think this is likely, but I thought I’d ask.
I may be a bit older than you’re looking for (44, grew up in small town Indiana) but it just so happens I was back in the US for Thanksgiving and happened to discuss nutrition education with other members of my family.
All of the nutrition education I remember was structured in terms of the four main food groups: meat, dairy, grain, fruit & vegetables, focusing on the idea that these should all be represented in a balanced meal. We were also taught about nutritional content, mainly which vitamins are represented in which food groups (and which specific foods), but almost entirely separately from “meal planning”. This was hardly changed from the nutrition education my parents received some 20 years previously… though that’s not surprising, as a few of the teachers were the same!
My younger siblings (38, 40) saw the introduction of the fifth food group (fats & sugars, as I recall), presented as bad things that should be avoided. Also, the presentation of the four food groups was somewhat altered, bringing nutritional balance (and the “recommended daily allowance”) a bit more to the forefront of meal design.
(All of the above is based on our memories of nutrition education which may be highly flawed!)
What does Taubes say mainstream nutritionists said?
That they recommended that people reduce their fat intake (which they definitely did), but then he tries to pin increased consumption of sugary crap on them (which is much less credible). For example:
It doesn’t sound like you’re being neutral on this.
“Sugary crap” is just shorthand for “the sugary stuff everyone agrees is bad for you.” The badness of e.g. sugary soda is pretty uncontroversial among nutritionists, “low-carb” or otherwise.
It was my impression that dieticians recommend avoiding processed sugar because of the lack of nutrients, thus making it easy for a diet high in processed sugar to have too many calories and not enough nutrients. Also, that in people with a genetic predisposition to insulin resistance, diets high in sugar have been shown to be correlated with developing insulin resistance and diabetes.
I have never seen a professional dietician refer to ‘sugary stuff’ as ‘bad for you’.
That terminology has always confused me. What, sucrose is not a nutrient? Why not?
Not to mention that this is talking apples and oranges—calories are a term from the physics-level description and nutrients are a term from the biochemistry-level description.
The correct word is micronutrients. Perhaps some people mistakenly interchange the words.
I doubt anyone’s confusing physics with biochemistry when they talk about these things.
Mass media uses “nutrients” in the sense of “a magical substance, akin to aether or phlogiston, that makes you thin and healthy”. It is mostly generated by certificates of organic farming and is converted into its evil twin named “calories” by a variety of substances, e.g. anything connected to GMOs.
Ok. You clearly have a different kind of mass media there.
“It’s got electrolytes.”
You’re right that sucrose can indeed be considered a nutrient, but I’m just using the word to refer to essential nutrients, i.e., molecular groups that you need to consume in your diet for the proper functioning of human biochemistry and for which nothing else can substitute. As Nornagest says, these are vitamins, minerals, essential amino acids and essential fatty acids. Sucrose is not any of these, so it is not an essential nutrient.
I don’t see why ‘comparing apples and oranges’ invalidates the argument, though. What difference does it make if they refer to different processes?
I also agree that nutrition is extremely contentious and politically charged.
Well, essential nutrients are a somewhat different thing, but even that doesn’t really help. The issue here is the unstated underlying assumption that everyone needs all the essential nutrients, and the more the better.
To give an example, iron is an essential nutrient. Without it you get anemia and eventually die. So, should I consume more of this essential nutrient? In my particular case, the answer happens to be no—I have a bit too much iron in my blood already.
Unsurprisingly, for many essential nutrients you can have too much as well as too little. And yet the conventional wisdom is that the more nutrients the better.
Human biochemistry is very complicated, and all the public discourse about diet can manage is: Less calories! More nutrients! Ugh.
(yes, I know, I’m overstating things for dramatic effect :-P)
I agree with you that ‘more nutrients!’ is not sound advice, but again, I never said anything like that, not even implicitly.
Human biochemistry is indeed very complicated. That’s exactly why I responded to ChrisHallquist’s remark about ‘sugar being bad’, because I feel that that is vastly oversimplifying the issues at hand. For instance, simple sugars like fructose exist in fruit, and not necessarily in small amounts either. Yet I don’t think he would argue that you should avoid all fruit.
I am not arguing against you...
Well, ChrisHallquist is reading Taubes, and for Taubes insulin is the devil, along with the carbs leading to it :-/
What do you mean by small amounts? In the context of Taubes claiming that people are drinking soda because they don’t realize it’s unhealthy, this is the amount you’re comparing it with. (For comparison, that’s the amount in fruits.)
I once tried to plan a very simple diet consisting of as few foodstuffs as possible. Calculating the essential nutrient contents, I quickly realized that’s not possible, and that it’s better to eat a little bit of everything to get what you need, unless of course you take supplements.
Yes, that’s the idea behind Soylent but I’m rather sceptical of that concept.
Anyone else notice at least three of the soylent guys seem to have this unusual flush on their cheeks? Is this just sheer vitality glowing from them or could there be something else going on? :)
I’ve seen several pictures of Rob and his face seems to be constantly red.
Do you know if their Soylent recipe uses carrots or other pigmented vegetables? It could be an accumulation of the coloring. (This apparently happened to me as an infant with carrots. Made my face red/orangish.)
The early version contains carotenoids found in pigmented vegetables: at least lycopene, found in tomatoes, and alpha-carotene, found in carrots. It seems you’d get far fewer carotenoids from Soylent than from just eating one tomato and one carrot per day.
He mentions “not very scientific, but the males in my family have always loved tomatoes.” Perhaps that’s the explanation and not Soylent, although you get only about a third as many carotenoids from tomatoes as from carrots, so you’d probably have to eat ridiculous amounts of them to become red. Perhaps they love carrots too.
Early recipe, and practically speaking, I don’t know what the effects of one tomato & carrot a day would be! Rhinehart and the others have been on Soylent for, what, a year now? That’s a long time for stuff to slowly accumulate. Most people don’t eat a single vegetable that routinely. During the summer I eat 1 tomato a day (we grow ours) without glowing, but then I don’t eat any tomatoes during spring/winter, which is disanalogous.
I didn’t know that. Seems a likely explanation.
Does anyone actually think that the optimal amount of calories is zero and the optimal amount of nutrients is infinity? I haven’t seen many people taking a dozen multivitamins a day but otherwise fasting, so...
(If what they actually mean is that more people in the First World are eating more calories than optimal rather than fewer, and vice versa for certain essential nutrients, I’d guess they’re probably right.)
Then again, it’s hard for most people to think quantitatively rather than qualitatively, but that doesn’t seem to be a problem specific to nutrition.
It’s common for people to think that they (or others) should consume fewer calories and more nutrients. They generally stop thinking before the question of “how much more or less?” comes up.
And sometimes they are right.
True that, but that doesn’t seem to be specific to nutrition.
(That said, I am peeved by advice that assumes which way the listener is doing wrong, e.g. “watch less TV and read more books” rather than “don’t watch too much TV and read enough books”.)
Breatharians come close, but I guess the only nutrient they acknowledge is sunlight/vitamin D.
Um, no. Nutrients are things your body needs to function. Some, but not all, of them can be burned for calories. They can also be used for other things.
In this context, I’d take “nutrients” to refer loosely to the set of things other than food energy that we need to consider in diet: vitamins, dietary minerals (other than sodium, usually), certain amino acids and types of fat, and so forth. That doesn’t map all that closely to the biochemical definition of a nutrient, but I don’t expect too much from pop science, especially not in a field as contentious and politically charged as nutrition.
Oh, I don’t expect much from it at all, but unfortunately this terminology is pervasive and, IMHO, serves to confuse and confound thinking on this topic.
I’m sorry to say I expected more from you, Chris. How do you figure that reducing fat intake is “definitely true”? Especially when you’ve read Gary Taubes.
He means that it is definitely true that they were advocating reducing fat intake.
Wires-crossed moment. Yes, they were indeed; pity they were sooo wrong, and that the word “fat” conflates a dietary meaning with a physiological energy-storage meaning. In other words, people hear “makes me fat” when you mention fat and how one (me specifically) eats so much of it.
Peter Attia and Gary Taubes have set up NUSI to get some much-needed science behind optimal diet.
Peter’s site http://eatingacademy.com/ has loads of cool data on his experience with the ketogenic diet, FYI.
It’s been up for quite a while, but I haven’t noticed any progress. Is anything happening with it, or has it stalled?
Hmm, I haven’t seen anything either, my bad. ’Tis a shame; I’m not aware of any other optimal-diet science. Is there any?
Well, there are a lot of claims for that :-/
That they advocated reducing fat intake and especially saturated fats, and encouraged grain and carbohydrate intake.
This sounds familiar to me. I’m 32 and I definitely remember hearing stuff like this. I remember in elementary school (so, late 80s early 90s) seeing the Canada food guide recommend a male adult eat something like up to 10 servings of grains a day, which could be bread or pasta or cereal. You were supposed to have some dairy products each day, maybe 2-4. And maybe 1-3 servings from Meat & Alternates.
I remember that pretty much all fat was viewed (popularly) with caution, at least until Udo Erasmus came out with his book Good Fat, Bad Fat.
But I do recall a clear message that soda and snacks were unhealthy. It wasn’t as though soda was thought ok just because it was low fat / high carb.
This may help… And that, too.
If someone decides to read these, a tiny summary would be nice.
I will likely end up doing so in an upcoming post, but I may not find time to write it for a few weeks.
Does he argue there was a change of opinion in the 80s or before that? If I recall correctly, he argues that the guidelines have remained roughly the same for decades, or even changed for worse.
I don’t have the quotes readily on hand, but basically, yes: he claims the official low-fat recommendations of the 70s/80s were important.
I would like some feedback on a change I am considering in my use of some phrases.
I propose that journal articles be called “privately circulated manuscripts” and that “published articles” should be reserved for ones that can be downloaded from the internet without a subscription. A milder version would be to adopt the term “public article” and just stop using “published article.”
I think that if you do this and few others do, the main result will be to confuse your readers or hearers—and of those who are confused, when you’ve explained I fear that a good fraction of those who didn’t already agree with you will pigeonhole you as a crank.
Which is a pity, because it would be good for far more published work to be universally accessible than presently is.
A possibly-better approach along similar lines would be to find some term that accurately but unflatteringly describes journals that are only accessible for pay (e.g., “restricted-access”) and use that when describing things published on such terms. That way you aren’t redefining anything, you aren’t saying anything incorrect, you’re just drawing attention to a real thing you find regrettable. You might or might not want a corresponding flattering term for the other side (e.g. “publicly accessible” or something). “There are three things worth reading on this topic. There’s a book by Smith, a restricted-access journal article by Jones, and a publicly-accessible paper by Black.”
You don’t think “privately circulated manuscript” is 100% accurate?
I think it’s pretty clear to say “a privately circulated article by Jones and a published paper by Black,” at least as long as I provide links. The ambiguity I’m concerned about is where my comment is very short; the typical situation is providing the public version to someone who cited the private version.
“Privately circulated” implies something that’s only available to a very small group and not widely available. This might be a fair characterization in the case of some very obscure journals, but we might reasonably expect that most of the universities in the world would have subscriptions to journals such as Nature. According to Wolfram Alpha, there are 160 million students in post-secondary education in the world, not including faculty or people at other places that might have an institutional subscription.
Even taking into account the fact that not all of “post-secondary education” includes universities but probably also includes more vocational institutions that likely don’t subscribe to scientific journals, we can probably expect the amount of people who have access to reasonably non-niche journals to be in the millions. That doesn’t really fit my understanding of “privately circulated”.
Would you consider Harry Potter not to have been published because it is not being given away for free? Why should “published articles” be defined differently from “published books”?
Everyone already applies “published” differently to books and articles. In fact, most people use “published article” to mean “peer-reviewed article,” but even ignoring that, there are pretty big differences.
Why did you choose to make this comment here, rather than in response to my original comment?
Like what?
No, I read “privately circulated” as distributed to a limited and mostly closed circle. If anyone with a few bucks can buy the paper, I wouldn’t call it “privately circulated”.
Exactly.
A word is just a label for an empirical cluster. It’s misleading to talk about “accurate” as though there were a binary definition.
The “manuscript” part certainly isn’t, since these things are generally typeset.
I choose libel.
As always, a phrase being technically 100 percent correct has a lot less to do with whether it’s understood as intended than you might think. “A privately circulated manuscript” implies the Protocols of the Elders of Zion to me.
gjm chose the word “accurate.”
Wouldn’t it be more practical to simply adopt a personal rule of jailbreaking (if necessary) any paper that you cite? I know this can be a lot of work since I do just this, but it does get easier as you develop the search skills and is much more useful to other people than an idiosyncratic personal vocabulary.
Any how-to advice on jailbreaking? Do you just mean using subscriptions at your disposal?
I wonder if “pirating” papers has any real chance of adverse repercussions.
I think there have been past threads on this. The short story is Google Scholar, Google, your local university library, LW’s research help page, /r/Scholar, and the Wikipedia Resource Request page.
I have 678 PDFs on gwern.net alone, almost all pirated, and perhaps another 200 scattered among my various Dropboxes. These have been building up since 2009. Assuming linear growth, that’s something like 1,317 paper-years (((678+200)/2)*3) without any warning or legal trouble so far. By Laplace’s rule of succession, that suggests a risk of trouble per paper-year of 0.076% (((1+0)/(1317+2))*100). So, pretty small.
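Spelled out, the rule-of-succession arithmetic looks like this (a sketch using only the numbers from the comment above):

    papers = (678 + 200) / 2      # rough average number of hosted papers since 2009
    paper_years = papers * 3      # 1317.0, assuming linear growth over ~3 years

    troubles = 0                  # incidents of legal trouble observed so far
    # Rule of succession: (successes + 1) / (trials + 2)
    risk = (troubles + 1) / (paper_years + 2)
    print(f"{risk:.3%} risk of trouble per paper-year")  # 0.076%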
There is no dichotomy. Word choice is largely independent of action. You set a good example, but you cite very few papers compared to your readers. Word choice to nudge your readers might have a larger effect. Do your readers even notice your example?
My question is how to get people to link to public versions, not how to get them to jailbreak. I think that when I offer them a public link, it is a good opportunity to shame them. If I call it an “ungated” link, that makes it sound abnormal: a nice extra, but not the default. An issue not addressed by my proposal is how to tell people that Google Scholar exists. Maybe I should not provide direct links, but Google Scholar links. Not search links, but cluster links (“all 17 versions”), which might also be more stable than direct links.
I don’t know. I know they often praise my articles for being well-cited, but I don’t know if they would say the same thing were every citation a mere link to Pubmed.
If you just want to shame them, then there’s a much more comprehensible choice of terms. For example, “useful” or “usable”. “Here is a usable copy”, implying their default was useless.
Universities have a lot of subscriptions so that their students can access journal articles for free, so “privately circulated” perhaps isn’t as accurate as you’d like to think. Journals can also be accessed from libraries.
Feel free to elaborate on your reasons and goals for this (beyond the obvious signaling).
What is the obvious signaling?
That you are the type of person who thinks that all research should be freely available and charging for access to scientific journals is morally wrong. (You likely also prefer Linux over Windows because MS is evil, but put up with Apple because it is cool.)
I had to double-check that you weren’t secretly RMS.
RMS? Try Nina Paley cf
But today I am not encouraging people to violate copyright, just to prefer links that work.
Is there a better expression for the “my enemy must be the friend of my other enemy” fallacy, or insistence on categorizing all your (political or ideological) opponents as facets of the same category?
Out-group homogeneity seems closely related, at least.
Semi related article (pdf link):
What Is the Enemy of My Enemy? Causes and Consequences of Imbalanced International Relations, 1816–2001
Abstract:
Recently found this paper, entitled “On the Cruelty of Really Teaching Computer Science”, by Dijkstra (plaintext transcription here). It outlines ways in which the teaching of computer programming failed (and still fails) to actually jump across the transformative-insight gap that led to the creation of the programmable computer. Probably relevant to many of this crowd, and very reminiscent of some common thoughts I’ve seen here related to AI design.
In the same place I found this paper discussed, there was mention of this site, which was recommended as teaching computer science in a way that implements Dijkstra’s suggestions, and of this textbook, similarly. I can’t vouch for them personally yet, but they might be an appropriate addition to the big list of textbooks.
Dijkstra’s ideas may be relevant to safety-critical domains (at least to some extent) but the article is flagrantly ignoring cost-benefit tradeoffs. Empirically we see that (manual) proof-oriented programming remains a small niche while test-driven programming has been very successful.
He’s certainly not ignoring cost-benefit tradeoffs. He acknowledges this as a perceived weak point, and claims that, when practiced properly, the tradeoff is illusory. (I rate this unlikely but possible, around 2% that it’s purely true and another ~20% that the cost increase is greatly exaggerated.)
I’m pretty sure Dijkstra would argue (and I’m inclined to agree) that proof-oriented programming hasn’t gotten a fair field test, since the field is taught in the test-driven paradigm and his proof-oriented teaching methods were never widely tried. There’s definitely some status quo bias at work; the critical question is whether Dijkstra’s methods would pass the reversal test, and if so how broadly. My intuition suggests “Yes, narrowly with positive outlook”; as we move toward having more and more information on cloud-computing servers and services and social networks, provably-secure computing seems likely to be appealing in increasingly broad applications, particularly when you look at large businesses wanting to reap the benefits of new technologies but very leery of the negative consequences of bugs.
And of course, even in the status quo, these methods still have relevance to anyone looking to make high-risk things like AI.
I would be skeptical of this claim, given how diverse the field of software engineering is. Many programmers are both self-taught and mathematically talented, so they would be prone to trying out neat things like proof-oriented programming even if mainstream schools only taught the test-driven paradigm. At the same time, many schools actually focus on teaching computer science rather than software engineering, taking a much more theoretical and mathematical approach than most programmers will ever actually need. People coming from these backgrounds would also seem inclined to try out neat formal methods. (If they pursued an academic career, they could even do so without profitability concerns.)
Dijkstra’s general sentiment seems to be that applying existing engineering practices from the civil, mechanical, electrical, etc. engineering disciplines to computer science is woefully inadequate. With this, I agree. I also agree that there seems to be some weird set of beliefs in mathematical culture that the human brain is superior to a computer and that no computer could ever do mathematics like a human could (I’ve seen even prominent mathematicians use Gödel’s theorem as bogus ‘evidence’ of this).
But the problem is that there doesn’t seem to be a viable alternative to the status quo of software engineering, not at the moment. The only type of radical new thinking I am aware of is the functional programming approach taken by e.g. Haskell. But there are a lot of issues there as well. So far, productivity has been far higher using the more traditional way of doing things.
I did some Googling after reading the article and found this book by Dijkstra and Scholten, which actually shows how a first-order language could be adapted to yield easy and teachable correctness proofs. That is actually amazing! I have a degree in CS, and unfortunately I’ve never seen a formal specification system that could actually be implemented rather than being just some almost-tautological mathematical logic, like so many systems that exist in academia. Thanks very much for the link.
If you are interested in this kind of thing, you should check out Dafny. It’s a programming language with Hoare-logic style pre- and postconditions (and the underlying implementation computes weakest preconditions, Dijkstra-style). But what sets it apart is that it is backed by an automatic theorem prover (Z3) which is sufficiently powerful to handle most things that seem trivial to a human. To me Dafny feels like the promise of programming verification research in the 1970s finally came through: you can carry out program verification like you would with pen and paper, without being overwhelmed by finicky algebraic manipulations.
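For readers who haven’t seen the contract style: below is a rough, hypothetical Python analogue (the function and its conditions are invented for illustration, and this is not Dafny syntax). In Python the asserts are only checked at runtime; Dafny’s whole point is that the corresponding requires/ensures clauses are proved statically by Z3 before the program ever runs.

```python
def max_index(a: list[int]) -> int:
    """Return an index of a maximal element of a non-empty list."""
    assert len(a) > 0                    # precondition (Dafny: requires |a| > 0)
    best = 0
    for i in range(1, len(a)):
        # loop invariant: a[best] >= a[j] for all j < i
        if a[i] > a[best]:
            best = i
    assert all(a[best] >= x for x in a)  # postcondition (Dafny: ensures maximality)
    return best
```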
Mathematicians (and Dijkstra qualifies as one) have been bemoaning the lack of rigour in undergraduate education for some time now. (Aye, even as early as the French vs. English trigonometry textbook debates of the 1800s.) The United States has a peculiar cultural mismatch between the relative quality of secondary and undergraduate education, which in my mind causes most of the drama. In particular, EWD1036 was written during Dijkstra’s career at UT Austin.
I’d like to know if this phenomenon is global, though.
Is it just me, or is solipsism wrong?
It’s just you. But it amuses me that you think otherwise.
The talk Eliezer Yudkowsky held at Oxford (and the resulting discussion) are now online.
If the human race is down to 1000 people, what are the odds that it will continue and do well? I realize this is a nitpick—the argument would be the same if the human race were reduced to a million or ten million.
It’s an interesting question. The Toba catastrophe theory suggests that the human population fell as low as 10,000 individuals during a period of climate change linked to a supervolcano eruption. Another theory suggests the population fell as low as 2,000 individuals. Overall I think 1,000 individuals is enough genetic diversity that humans could recover reasonably well.
The real problem seems to me to be whether humans could ever catch up to where we are after being knocked down so low. Some people have suggested that if civilization collapses humanity won’t be able to start a new industrial revolution due to depleted deposits of oil and surface minerals.
Garbage dumps would have metal that’s more concentrated than you’d find it in ore. I’m not sure how much energy would be needed to refine it.
If I were writing science fiction, I think I’d have modest tech-level efforts at mining garbage dumps in coastal waters.
The History of the Next Ten Billion Years—a Stapledonian handling of the human future. Entertaining, though I think it underestimates human inventiveness.
Aluminum, in particular, is known for being very difficult to extract from ore, but once extracted, very easy to recycle into new products.
Oil (and coal, which is less topically sexy but historically more significant to industrialization) is the big problem, though rare earths and other materials that see use more in trace than in concentration could also be an issue. If you’re a medieval-level smith, you probably wouldn’t care too much whether you’re getting your Fe from bog iron nodules or from the melted skeletons of god-towers in the ruins of Ellae-that-Was, although certain types of bottleneck event could make the latter problematic for a time.
Still, I’d be willing to bet at even odds that that wouldn’t be a showstopper if it came to it.
On the other hand, these future humans would probably be able to learn things like science much more quickly because of all the information we have lying around everywhere.
Our information storage media has a surprisingly short shelf life. Optical disks of most types degrade within decades; magnetic media is more variable but even more fragile on average (see here and the linked pages). There are such things as archival disks, and a few really hardcore projects like HD-Rosetta, but they’re rare. And then there’s encryption and protocol confusion to take into account.
A couple centuries after a civilization-ending event, I’d estimate that most of the accessible information left would be on paper, and not a lot of that.
I don’t know. Those objects make certain kinds of superstitions seem much more plausible.
The audio cuts out partway through the talk.
I haven’t watched it all the way through, but I can jump into it at any point and it plays fine.
Audio cuts out at around 38 minutes, after that there is no sound from Eliezer’s mic, so it’s apparently relying on the camera mic which makes the recording noisy and hard to hear.
The audio is only gone for a minute or so, so while this is annoying it’s not major.
No, it cuts out completely for a minute, and then it apparently switches to the camera mic, which makes Eliezer very hard to hear over noise.
He’s a better public speaker than I expected.
LW meta (reposted, because a current open thread did not exist then): I have received a message from “admin”:
I have seen, indeed, options to create a wiki account. But I already have one; how do I associate the existing accounts?
A related question: I clicked the (modified) URL that “admin” sent me, and the page contained a form where I could fill in my LW password in order to create a wiki account. I submitted it but I cannot login on the wiki with my LW credentials. What’s going on?
It looks like the Sheep Marketplace is done, after a major heist of its bitcoins took place. At least one part of this prediction worked out.
Today I skim-read Special Branch (1972), the first book-length examination of Good’s “ultra-intelligent machine.”
It is presented in the form of a 94-page dialogue, and the author (Stefan Themerson) is clearly not a computer scientist nor an analytic philosopher. So the book is largely a waste of attempted “analysis.” But because I’m interested in how ideas develop over time and across minds, I’ll share some pieces of the dialogue here.
A detective superintendent from “special branch,” named Watson, meets up with the author (the dialogue is written in first person), and explains that a team is building Good’s ultraintelligent machine. They both refer to the machine with female pronouns, and apparently “she” will be an odd machine indeed (p. 25):
Soon, the author gives some pieces of advice to those making the ultraintelligent machine:
No ought-arguments should be built into the machine (p. 26). “As she is a logical machine, it’s obvious that you can’t feed any ought-arguments into her. Because there is no logical argument to tell her why one ought not to kill or cheat or oppress or tyrannize.”
Don’t put any beliefs into the machine (p. 29).
Don’t let the machine read Plato first (p. 59).
After much further discussion, the book ends with a scene after the ultraintelligent machine has been built (p. 93):
Make sure you use the tag “open_thread” so that it will show up in the latest open thread on the sidebar.
Here are two (correct) arguments that are highly analogous.
Brownian motion, the fact that a particle suspended in water or air does not come to rest but keeps dancing at some minimal rate, is an important piece of evidence for the atomic hypothesis. Indeed, Leucippus and Democritus are said to have derived the atomic hypothesis from such motion; certainly Lucretius offered it as evidence.
Similarly, Darwin worried that “blending” inheritance would destroy variation in quantitative traits. He failed to reach the conclusion that heredity should be discrete, though.
I wonder if there is an analogous argument having to do with markets (or other social systems) and what an “equilibrium” looks like.
I’m planning to run a rationality-friendly table-top roleplaying game over IRC and am soliciting players.
The system is Unknown Armies, a game of postmodern magic set in a creepier, weirder version of our own world. Expect to investigate crimes, decipher the methods behind occult rituals, interpret symbols, and slowly go mad. This particular game will follow the misadventures of a group of fast food employees working for an occult cabal (well, more like a mailing list) that wants to make the world a better place.
Sessions will be 3-4 hours once a week over IRC or Google Hangouts or Skype or whatever people are most comfortable with. Slots for two to three players; email me at sburnstein@gmail.com if you’re interested or if I can answer any questions about the game.
Is there a name for the halo effect of words? There should be, because one example of this is “Overdraft Protection”.
EDIT: I am specifically referring to the debit card overdraft protection service.
EDIT 2: I have been made aware that I am using the wrong term; “overdraft service” is the term most commonly used by major banks to refer to the “service” they offer on debit card overdrafts. If you see me refer to something as Overdraft Protection, please assume I am referring to the service banks give you on debit card use.
If you are from the States, I am willing to bet that you have opened a bank account at some point in your life and were presented with the option to have Overdraft Protection. Say no. For most people, saying no is the right answer. I think many people, when asked about this on the spot, don’t have enough time to think through what Overdraft Protection really is. Just because someone decided to name something “Protection” doesn’t mean it protects you from anything. It might even feel silly to opt out of something that is offered for “free”, which is why I think a lot of people fall for this poor decision. Let me explain why you should opt out.
If you pay for something that you do not have the funds to cover, the bank will lend you the money or help you transfer money from a linked account to cover your purchase. They charge anywhere from $12 to $34 or more for this service. Chase is a major bank, and they charge $34. If, for example, you forgot to deposit your paycheck and bought a $3 latte with only $1 in your checking account, Chase will “protect” you from having the purchase declined, for a fee of $34.
If you knew that you didn’t have enough money, would you agree to pay $34 for Chase to loan you the money? The answer is no. You would rather have the purchase declined. There is no fee for having a purchase declined. In fact, the real protection is having the purchase declined and not borrowing money at insane rates of interest.
These fees stack per transaction. Most people are hit with fees because they were not aware they were lacking the funds, which means multiple transactions are often made the same day in the belief that everything is OK. So if you buy a latte for breakfast, lunch, and dinner, Chase will charge you $102 …because, you know, they are protecting you from the embarrassment of being declined. Lucky you.
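To make the stacking concrete, here is a toy Python sketch of that day of lattes. The $34 fee and the $1/$3 figures are just the numbers from the example above; real banks may cap the number of fees per day.

```python
FEE = 34                 # per-transaction overdraft fee (the Chase figure above)
balance = 1              # dollars in checking
purchases = [3, 3, 3]    # latte for breakfast, lunch, and dinner
fees = 0
for price in purchases:
    if price > balance:  # insufficient funds: bank covers it and charges the fee
        fees += FEE
    balance -= price     # balance goes (further) negative
print(fees)              # 102 -- three overdrafts at $34 each
```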
Too many people have Overdraft Protection when they don’t need it, and the problem isn’t that most people are too stupid to do simple math; it’s that they never really thought about the implications. They were rushed into agreeing to something without thinking about it. Well, now you have thought about it, so you don’t have an excuse. If you don’t need overdraft protection, go and opt out now. Please avoid the trap of thinking that you never overdraft so it doesn’t matter; that is a bad decision. Even if it were true that you rarely overdraft, why would you deliberately keep a potential landmine of fees under your feet?
When is overdraft protection appropriate? Very rarely; it can come in handy when writing important checks for a mortgage or loan. Other than that, most people do not use checks to pay bills any more. I used to be a poor university student, and students are the prime targets of these bank scams, so get smart and get rid of it today.
I bank with Chase, and unless the written information I’ve received from them is a straight-up lie (which would put them at risk for a lawsuit...) this information is factually inaccurate. What you describe as “overdraft protection” is actually the policies you’ll be subjected to without overdraft protection. Overdraft protection does come with fees, but they’re much less, no more than $10 a day.
(The moral of the story: don’t be overdrawn. It will cost you money in fees with or without overdraft protection.)
The confusion stems from the fact that Chase has two different services, one for check writing and one for debit cards. I am specifically talking about debit card usage and will edit my post to make it more clear.
Chase will charge you $10 per day for check-writing overdraft protection on accounts that are linked; this is true. However, for debit card use, you would be charged $0 if you opt out and indeed pay $34 per transaction if you opt in. The problem is that many banks combine checking and debit card usage into one plan, while others, like Chase, split it up. My main point is that check writing is becoming very rare, and most people get dinged with fees using their debit cards. So if they are combined and you really don’t write checks, then you definitely should opt out.
There is a $34 fee for debit card overdraft protection and a $0 fee for opting out (here and here). Does this resolve your disagreement?
If you opt out of debit card overdraft protection, it will not cost you any money! If you opt in, it will cost you money. I know it sounds ridiculous, because it is.
Based on the links, Chase doesn’t even call their service for debit cards “overdraft protection,” so this doesn’t support the original point about words misleading people. Also, it seems that if you have debit card coverage and overdraft protection, you’ll at most be charged $10/day for overdrawing with your debit card. Still better to use a credit card when you don’t have money in your checking account, obviously.
(Also, as Louie Helm recently pointed out, as long as you pay your balance in full every month, you’re better off using your credit card for everything because the rewards program will reduce the cost of everything you buy by 1% or more.)
In the spirit of being helpful and trying to be as factually accurate as possible, I have edited my original post, as you are absolutely correct about the terminology. I would only argue that my original point was merely a segue into my main argument: that debit card overdraft services are typically poor decisions.
I do not believe this is accurate.
However, assuming it is accurate: if you weigh the cost/benefit (again, talking about debit card use), it is IMO still a terrible deal. My bank happens to be Wells Fargo, and they charge $12 for debit OD service; better, but still pretty bad. But ultimately you must decide what is an acceptable fee. The vast majority of people getting dinged for debit card overdrafts are not buying life-saving medication; it’s more likely to be a cup of coffee or a hot dog. So if you asked them what they would have done had they known they had insufficient funds, they would likely reject the $10 or $34 fee. This isn’t even considering that most banks are not obligated to tell you that you are overdrawn, so you could get dinged $10 a day until you finally realize it, as opposed to being notified right away by being declined. BTW, since you’re a Chase customer: Chase happens to waive the fee if you can fund your account by day’s end, but they aren’t obligated to inform you that you are negative.
You’re better off using your credit card and saying no to debit card overdraft service, for the most part. Unless you frequently find yourself in the position where your purchases must go through, for whatever reason.
Also, use financial institutions whose incentives are better-aligned with the interests of their depositors; notably, credit unions.
Oh wheee, this is what I worked on in DC. There are a few different things that can happen when you try to make a purchase on a debit card with insufficient funds:
the merchant sees you don’t have the money, the card is declined, and you pay the bank nothing
the bank transfers money from a linked account (usually a savings account or line of credit) and charges a fee for this service (median $10, at least back in 2012)
the bank covers the cost of the purchase, which you now need to pay back, along with a fee of (at median) $35
Both the second and third option are sometimes called Overdraft Protection. There is no industry standard term, so it can be very hard to contrast between banks and disambiguate overdrafts covered by a transfer and regular overdrafts. (You can see the 14 different terms we found across 24 banks and credit unions here).
The law changed recently (in the last 5 years) so that banks have to ask you to opt in to overdraft coverage. If you take no action, then when you try to buy something with your debit card that you don’t have the money to cover, you just can’t do it, and you incur no fee. So banks have done a big push to get people to opt in, including using the “Overdraft Protection” language, but for most people it’s a bad choice.
And, fun fact, some banks reorder your purchases, when they’re processed, in order to maximize the number of overdrafts you incur. (I.e., if you had $20 in your account and bought, in order, items costing $5, $5, $5, $20, some banks would reorder your purchases high-to-low so they could charge three overdraft fees instead of one.) You can see a graphic with data from a real-world case here.
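A minimal sketch of that reordering effect in Python, using the numbers from the example (this assumes a flat per-overdraft fee and ignores daily caps and the transfer-style coverage described above):

```python
def overdrafts(purchases, balance):
    """Count how many purchases land while the balance is, or goes, below zero."""
    count = 0
    for price in purchases:
        balance -= price
        if balance < 0:
            count += 1
    return count

purchases = [5, 5, 5, 20]  # chronological order, starting balance $20
print(overdrafts(purchases, 20))                        # 1 overdraft
print(overdrafts(sorted(purchases, reverse=True), 20))  # 3 overdrafts
```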
Fun fact: if you overdraw and are protected by a bank transfer from a linked account, but that linked account also has insufficient funds, you get charged both fees: one fee for the transfer, and another for not having enough after the transfer! How can they justify this? Easy: the fee covers just the transfer; it doesn’t guarantee that the transfer will be adequate.
Last month I signed up for a bank account at my local credit union, and they do offer overdraft protection of various sorts. One of the things that impressed me was that the woman who was setting up my account explained to me why I did not want overdraft protection, using a very similar example.
I cannot speak for all banks’ policies, but that isn’t how the ‘overdraft protection’ on my account works. How mine works (it’s actually a credit union; maybe that’s the difference) is:
Without it, if I were to write a check with insufficient funds, I would get charged some large fee. But with the overdraft protection, it transfers money from my savings account to checking to cover it, for free, helping me avoid the fee. Essentially, it lets me use the savings account as a safety net to avoid the charges.
This ‘protection’ has in fact saved me in a couple of instances.
UK banks lost a test case a few years ago that led to a lot of people getting back however many years of overdraft charges, plus interest. The same thing happened a bit later with “payment protection insurance”, intended to cover loan repayments if you lost your job, but with so many exclusions as to be almost worthless.
The end result was something like a forced savings policy. Cue people who avoided the initial trap wondering where their free money is.
You have to wonder sometimes.
Computer programs which maximize entropy show intelligent behavior.
Kevin Kelly linked to it, which means it might make sense, but I’m not sure.
It sounds like Prigogine (energy moving through a system causes local organization), but I’m not sure about Prigogine, either.
I’m curious about this, and specifically what’s meant by this “decoupling”. Anyone have a link to research about that?
It sounds somewhat like “financial AIs are paperclipping the economy” or possibly “financial AIs are wireheading themselves”, or both. If either is true, that means my previous worries about unfriendly profit-optimizers were crediting the financial AIs with too much concern for their owners’ interests.
Previously: http://lesswrong.com/lw/h96/link_causal_entropic_forces/ http://lesswrong.com/lw/h7r/open_thread_april_1530_2013/8th1
Louie on G+ links an interesting pair of philosophy papers: http://plus.google.com/104557909419304580033/posts/jNdsspkqGH8 - An attempt to examine the argument from disagreement (’no two people seem able to agree on anything in ethics’) by using computer simulations of belief convergence. Might be interesting reading.
http://dl.dropboxusercontent.com/u/243666993/2010-gustafsson.pdf
http://dl.dropboxusercontent.com/u/243666993/2012-vallinder.pdf
There are a couple of home EEG sets commercially available now; has anyone tried them? Are they useful tools for self-monitoring mental states?
I was diagnosed with avoidant personality disorder and obsessive-compulsive personality disorder, as well as major depression, about 4 months ago, and even though my depression has been drastically reduced by medication, I still often have suicidal thoughts. Does anyone have advice on dealing with this? It’s just hard to cope with feeling like I’m someone it isn’t good or healthy to be around.
Lots of people enjoy hanging out with me despite my occasional suicidal ideation! Most people can’t read your mind!
Naive question (if you don’t mind): what sort of things trigger your self-deprecating feelings, or are they spontaneous? E.g., can you avoid them or change circumstances a bit to mitigate them?
The prospect of social interaction, whether it actually happens or not, can trigger it. Any time I start a project (including assignments at university), go back to edit something, and it doesn’t meet my standards, I get quite severe self-deprecating feelings.
For the second one I managed to mitigate it by changing my working process to something more iterative and focused on meeting the minimum requirements before optimizing. I still have not found a remotely serviceable solution for the social interaction problems, and the feedback loops there are more destructive too. At least with the perfectionism problem I can move to another project to help restore some of my self-esteem.
It’s easy to be sympathetic with these two scenarios—I get frustrated with myself, often enough. Would it be helpful to discuss an example of what your thoughts are before a social interaction or in one of the feedback loops? I’m not really sure how I’d be able to help, though… Maybe your thoughts are thoughts like anyone would have: “shoot! I shouldn’t have said it that way, now they’ll think...” but with more extreme emotions. If so, my (naive) suggestion would be something like meditation toward the goal of being able to observe that you are having a certain thought/reaction but not identify with it.
Evolution in humans does not work to produce an integrated intellectual system; it produces a set of hacks better suited to the ancestral environment than any other human’s. Thus we should expect the average human brain to have quite insular but malleable capabilities. Indeed, I have the impression that old arts like music try to repurpose those specific pathways in novel ways. Are there parts of our brains we can easily repurpose to aid in our quest for rationality?
I have a notion we aren’t just adapted to the ancestral environment—we’ve also got adaptations for low-tech agriculture (diligence, respect for authority) and cities (tolerance for noise, crowding, and strangers). Neither list is intended to be complete.
I’ve wondered whether people in separatist/supremacist movements have fewer city genes than average.
Great point; we have adaptations for a multitude of environments.
Still, the question is: can we repurpose some of those little adaptations for rationality, or fine-tune them by some technique?
You mean like imagining you’re going to present an issue to an authority figure when thinking about it? Or something more wacky like converting reasoning problems into visual problems?
Either. The first will be lower hanging fruit, the second will be much more non-obvious.
My point, I think, is that most of the stuff on LessWrong is on the theoretical side of things and quite impractical.
I am trying to find a post here and am unable to find it because I do not seem to have the right keywords.
It was about how the rational debate tradition, reason, universities, etc. arose in some sort of limited context, and how the vast majority of people are not trained in that tradition and tend to have emotional and irrational ways of arguing/discussing and that it seems to be the human norm. It was not specifically in a post about females, although some of the comments probably addressed gender distributions.
I read this post definitely at least six months and probably over a year ago. Can anyone help me?
Probably not what you’re after, but there’s Making Rationality General-Interest by Swimmer963. Further out but with a little overlap with what you describe: Of Gender and Rationality by Eliezer. Or No Safe Defense, Not Even Science by Eliezer.
I think it’s less than 25% probable that any of these is what you’re after, but (1) looking at them might sharpen your recollection of what you are after, (2) one or more might be a usable substitute for whatever your purpose is, and (3) others reading your comment and wanting to help now needn’t check those :-).
“No Safe Defense, Not Even Science” is close enough for the purpose I was using it for. Thank you!
You’re welcome.
Someone led me to Emotional Baggage Check. The idea appears to be that people can leave an explanation of what’s troubling them, or respond to other people’s issues with music or words of encouragement. It sounds like a good idea (the current popular strategy of whining on a public forum seems to be more trouble than it’s worth). It doesn’t look particularly troll-proof, though.
If nothing else, I’d like to look at them in a year or so and see how it’s turned out.
Hey dude! I am the creator of that site. Hm yeah we are working on the whole troll-proof thing. Actually have a whole alert system set up but we still have more to work on. And yeah I’m pretty interested to see where we will be in a year too. Stay tuned.
Can someone change the front page so it doesn’t say “Lesswrong:Homepage”? This sounds like it is a website from 1995. Almost any other plausible wording would be better.
The graphic appears to be broken.
Directed Technological Change and Resources
http://whynationsfail.com/blog/2013/11/26/directed-technological-change-and-resources.html
“Temporary interventions are sufficient to redirect technological change...”