Open thread, Dec. 15 - Dec. 21, 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I am looking to set up a morning routine for myself and wanted to hear whether you have some unusual component in your morning routine that other people might benefit from.
One thing I might start experimenting with is a version of morning contemplation. Ancient Stoicism seems to recommend reflecting on one’s principles in the morning, Christian tradition has morning prayers, and Benjamin Franklin reviewed his virtues every morning, so why not do a little personalised version of it? Things like the Serenity Prayer or Tarski’s litany.
I have a terrible problem where I wake up from my alarm, turn off the alarm, then go back to sleep (I’ve missed several morning lectures this way). The solution I’ve been trialing is to put a glass of water and some caffeine pills on my bedside table when I go to sleep. That way, when I wake up I can turn off the alarm, take the pill and give in to the urge to put my head back on the pillow, confident that the caffeine will wake me up again a few minutes later. This has worked every time I’ve remembered to put out the pills.
I got this idea from someone else on LW but I’ve forgotten who, so credit to whoever it was.
My preferred solution for this problem is to have the alarm on the other side of the room so I have to actively get out of bed and walk over to it.
I tried that before, I’d just turn it off and get back into bed.
Hmm, here’s another solution, which a friend of mine tried and which seems to require careful timing: set multiple alarms, each going off a few seconds after the previous one, so that they lead you all the way around the room.
In general, I dislike posting comments that just express agreement without any content, but this is such a great idea that I just had to pop in and say, “That’s awesome.” So, without further ado:
That’s awesome.
It also happened to me. I used to wake up at 7:15, snoozing until 7:30, which was very bad for my morning routine. Putting the alarm clock on the other side of the room helped, but when I tried to set the waking time to 7:00, I would just get up, hit snooze and go back to bed. Now I’m trying to set it to 7:10 and it seems to work well.
There probably is a subconscious set point which needs to be adjusted gradually. You also need stuff to do when you wake up: if you wake up just for the sake of it, your subconscious will know and put you back to sleep.
I would too, so I replaced my alarm with a cron job on my computer to play music at a certain time—usually an Insane Clown Posse song (specifically Vultures, which starts loudly and distinctively enough to work well structurally as an alarm), to make sure I wouldn’t stay in bed and listen.
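For anyone who wants to replicate this, here is a minimal sketch of the setup; the paths, the schedule, and the use of mpv as the player are my assumptions, not necessarily the original configuration:

```python
#!/usr/bin/env python3
# alarm.py -- invoked by cron at wake-up time to blast a song.
# Hypothetical crontab entry (7:30 on weekdays; adjust paths to taste):
#   30 7 * * 1-5  /usr/bin/python3 /home/me/alarm.py
import subprocess

# mpv is one common command-line player; substitute whatever is installed.
subprocess.run(["mpv", "--volume=100", "/home/me/music/vultures.mp3"])
```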
Hard mode version: instead of music, use a cron job that will start deleting all your files if you don’t answer a prompt within the space of a minute.
Then one day you’re ill, or in the shower, or you go away on business and forget to disable the cron job, and BOOM no more files.
As soon as I close my eyes to sleep I silently say to myself “I will wake up early in the morning” 100 times. If I do this I will wake up before my alarm without fail; if not, I will hit snooze as many times as I can get away with.
Have you ever learned (self-)hypnosis? I tried something quite close to this technique (repeat a phrase or short set of phrases as many times as I could, while intentionally relaxing as though into hypnosis but without the “and at the end of it I wake up again” key) after I learned self-hypnosis, and it achieved things I would not have thought were possible. Getting up without lazing in bed is interesting and handy, but programming my brain to wake up at 4AM (having gone to bed at 11PM or so) without an alarm of any kind was just freaky. I could hit within five minutes or so of my target time (it may help that I tended to fall asleep such that the last thing I saw before falling asleep was the time on my alarm clock) and I would wake up alert, enough so that it felt natural to get out of bed.
Of course, being roughly 13 at the time, I didn’t use this power for anything particularly valuable—usually just to sneak downstairs and play more StarCraft than I was allowed to play mid-week, then sneak back upstairs and into bed before my alarm went off for school—but the fact that I could do it still astonishes me. It’s a lot less reliable now, either because I’ve long since fallen out of practice with self-hypnosis or because my sleep schedule is now a lot more irregular but arguably less flexible to start with.
I should start practicing again… maybe even see if I can find my hypnosis lesson tapes (and something that will play audiocassette) for a reminder.
“I will wake up, alert and feeling rested, at 7:45 in the morning and immediately get out of bed. I will wake up, alert and feeling rested, at 7:45 in the morning and immediately get out of bed. I will wake...”
I have this ability when my sleep is regular enough and I’m somewhat aware of the time when I fall asleep. I think you probably could have done it without hypnosis, and the relaxation was the key, so that you fell asleep reliably and quickly.
I’ve got the self-timing ability (or at least did, I haven’t tested it lately) including being able to decide when I want to be out of a store—I can (or could) wander around, looking at whatever I pleased, and be out of the store within a minute or two of when I planned.
My ability kicked in when I’d been watching a lot of tv (not on a computer), so I had a chance to learn what a half hour was.
I consider the ability to monitor clock time unconsciously to be very mysterious.
Huh, I never tried using it for anything except sleep (or rather, waking). For me it’s quite literally an unconscious thing—the effect didn’t work if something prevented me from going to sleep while/after I did it (the idea being to repeat the sentences until I’m asleep) - but I don’t see any reason it shouldn’t be usable while awake. Now I’m curious and want to try it; thanks for the idea!
Note that my waking use of it is still unconscious. I don’t know that I could say what time it is, and I don’t have awareness that it’s getting close to the time I want to leave the store. I just happen to be done looking at things at the time when I wanted to be out of the store.
I set two alarms, about 1-2 minutes apart. The first one is soft and beside my bed, the second is loud and on the other side of the room. I get a couple minutes to achieve consciousness in a civilized fashion, and then the second alarm actually gets me out of bed. I’ve previously tried just having the second, but I got very good very fast at turning it off without really gaining consciousness.
The hack is due to Anders Sandberg, though with modafinil tablets [ETA: this last part is false, see Kaj’s reply]. Works wonderfully (whether with modafinil or caffeine).
Historical note: his original blog post on it specifically used caffeine pills.
Huh, thanks. Not sure how I managed to misremember so specifically. Edited post.
I’ve cultivated a habit of following to the letter any deliberate precommitments I make to myself, because it seemed like a useful sort of habit, and I haven’t failed yet. I might even choose to make that part of my identity in time...
With that already in place for a few months, over the last few weeks I’ve noticed that I have woken (and stayed) up in time iff I went to sleep in time (about half the time) or used the precommitment of, freely translated, “When I next wake up, if I notice that my ringing mobile phone has the piece of duct tape stuck to it that I put there to trigger the memory of this precommitment, and remember this precommitment, I will not touch my mattress after turning it off for the following 10 minutes.” (about half of the remaining times)
You see, my morning self doesn’t actually want to sleep in; it just apparently hasn’t internalized induction on enough of a gut level to draw the connection between going back to bed and falling back asleep. What it does understand is the connection between touching that mattress and tarnishing that record.
Edit: It now occurs to me that this needs a nonobvious warning: start carefully with lesser precommitments, lest you ruin this forever for yourself by having an akrasia’d self not care enough about the habit. Your reach exceeds your grasp by default on this one.
Edit: Shit, I actually just went back to bed after those 10 minutes. Gonna have to modify that...
I am not a morning person and never have been. If I really have to get up, I set three alarms: one on my iPad, which is next to me in bed, and two on my iPhone, which I place across the room. Not only do I get residual alarms, but I also get the getting-out-of-bed effect. My Dad has a similar system and it has served him well since his military days.
I exercise for 5 minutes within 5 minutes of getting up.
Before I did this, I sometimes had the habit of checking e-mail for like an hour before getting out of bed. After adding this to my routine, I never did that again.
Yes, when I was in high school, I would do about 20 push-ups as soon as I got out of bed. After that, my heart rate is a bit too high to go back to bed. Of course, after a while the 20 push-ups would get easy, so I would increase the number until I was doing about 50-70.
I think the best thing is to start implementing it as soon as you get up, to not give yourself time/opportunity to sabotage it. Also, get up as soon as you wake up.
This may be the biggest causative factor of how my day goes (though it is itself subject to causes such as when I went to bed).
Don’t neglect this.
I forget who suggested it, but I actually needed 2 alarm clocks to get my sleep schedule under control.
One wakes me up. It is across the room, on my computer chair. When it goes off I move it up to the shelf. Something about doing a task makes me loathe to go back to bed.
One makes me go to sleep. When it goes off, I stand up and put it on the computer chair. That way I can’t keep browsing the net, because I’m stuck standing up.
This was only necessary for like a week. Once I was going to bed at the same time every day I didn’t need alarms to wake up then. Nowadays I wake up almost automatically.
The key for me was going to bed at a set hour every day.
I have gotten myself a manual coffee grinder. Using this device it takes me about 4 minutes to grind enough beans for a large Melitta pot. Attending to this repetitive task gives me a small but regular block of time during which I find it easy to practice mindfulness. And when I am done, I make a pot of coffee.
I shower the night before to keep my bed clean and reduce morning prep time, allowing me to wake up later; then I just wet my hair with water in the morning for combing. I also have an Ensure as soon as I wake up, even if I’m going to have a more substantial breakfast shortly thereafter, because otherwise I’m a zombie.
I don’t know how unusual these are, but some of the components of my routine are:
putting the alarm on the other side of the room, so I make sure I don’t snooze;
a workout with a 10-minute “torcher”;
a cold shower after the workout;
a brief five-minute meditation before leaving.
Shower first. Then make breakfast/coffee.
One part is writing down whatever dreams I can remember right upon waking. This has led to me occasionally experiencing lucid dreams without really trying.
Also, since I am writing down dreams anyway, it is easy to do the other writing I find beneficial: namely, the major plan for the day and gratitude stuff.
You might be interested in this, if you haven’t already seen it.
[Meta] I often see threads like this where people recommend things that require a very high level of conscientiousness or planning ability to start with (e.g., “if you are tired in the mornings, get out of bed immediately and do x” requires you to be capable of forcing yourself to do x when you are tired).
I think you are mistaken about what is easy and difficult. Most of these are about dealing with lack of willpower, suggesting that the authors found something that was easier than it looked. Most of them don’t compare before and after, but jsteinhardt’s does.
In the newest episode of Person of Interest there is an explicit mention of Friendly Artificial Intelligence, and the main character seems to have an attitude towards AI similar to the one found on LessWrong. The exact quote is: “Even if I had succeeded in creating a benevolent machine, as if any such thing could exist, never forget that even a so-called ‘Friendly Artificial Super Intelligence’ would be every bit as dangerous as an unfriendly one.”
Since the LessWrongWiki is the second Google hit (at least for me) for both “Friendly Artificial Intelligence” and “Friendly Artificial Super Intelligence” (the first being this Wikipedia page, which explicitly mentions Eliezer in the second sentence), this might lead to an increased number of visitors. It should also be noted that in this series almost everything said about artificial intelligence is compatible with the ideas found on LessWrong (sometimes tweaked a little bit to make the series more interesting) - this series might well bring some LessWrong beliefs about AI to the attention of a broader audience.
Yeah, I watch this series and this season in particular makes me feel like the writers read LessWrong.
As with any technical subject I’m familiar with, I get a little weirded out by the treatment it receives in the mass media, as there are always little (or big) bits that are just dumb for the story’s sake, but Person of Interest seems to do a better job talking about AI than most shows do talking about whatever other technical subject.
For a family gift exchange game, I am giving a gift of the form: “25 dollars will be donated to a charity of your choice from this list”
Please help me form that list. My goal is to make the list feel as diverse as possible, so it feels like there is a meaningful decision, without sacrificing very much effectiveness.
My current plan is to take the 8 charities on GiveWell’s top charities list and remove a couple that have the same missions as other charities on the list, maybe adding MIRI or other x-risk charities (give me ideas) whose effectiveness is very difficult to compare with the GiveWell charities.
What would you put on the list?
What magazines would a LW member probably want to read/subscribe to?
My subscriptions: GQ, Vogue, Popular Science, Time, The New York Times, Vanity Fair, Rolling Stone.
For those who are intending to have children: a recent study shows a substantial drop in the IQ of children who were exposed to phthalates in the womb, and this pattern lasts at least until age 7. This is the first such study, so it may not turn out this way, but it seems worth noting, and it underscores how much low-hanging fruit there may still be in terms of environmental impacts on intelligence (similar to what happened with iodine in the 1920s and 30s).
Is there a way to get my LW notifications to be forwarded to my email?
I use Feedly + IFTTT for things like this. Try here or here.
How do you access the inbox RSS feed? As far as I can tell there’s no way for the feed reader to authenticate to access your LW account.
You may be right. I don’t use Feedly for that particular RSS feed, just similar feeds.
Do we have a catalog of Not Less Wrong rationality guides?
I know we have the list of rationality blogs, but I’m asking about a collection of material that educates at an entry level of formalized rationality but sits at a lower inferential distance than the Sequences.
I’m not aware of such a catalog but there are definitely resources that fit into this category. One good one is You Are Not So Smart.
I haven’t looked into it much myself, but a couple of people have mentioned RibbonFarm as being something like that.
I find it annoying that there is no way to view all the posts at once—you can see either main or discussion posts, but not both at once. I think it would be good to add “All” next to the tabs for Main and Discussion.
There is an undocumented way: all posts, all comments. But those URLs are not linked from anywhere.
Thanks! I’ll just add those to the browser as bookmarks.
When you go to GiveWell’s Donate page, one of the questions is,
And you can choose the options:
Grants to recommended charities
Unrestricted donation
I notice I’m reluctant to pick “Unrestricted,” fearing my donation might be “wasted” on GiveWell’s operations, instead of going right to the charity. But that seems kind of strange. Choosing “Unrestricted” gives GiveWell strictly more options than choosing “Grants to recommended charities” because “Unrestricted” allows them to use the money either for their own operations, or just send it to the charities anyway. So as long as I trust GiveWell’s decision-making process, “Unrestricted” is the best choice. And I presumably do trust GiveWell’s decision-making, since I’m giving away some money based on their say-so. But I’m nevertheless inclined to hit “Grants to recommended charities,” despite, like, mathematical proof that that’s not the best option.
Can we talk about this a little? How can I get less confused?
I wouldn’t make a restricted donation to a charity unless there was a cause I really cared about but I didn’t think the charity behind it was well-run and I didn’t know a better way of helping that cause.
I do not consider money to keep a good charity running as “wasted”—if anything I am deeply dubious of any charity which claims to have minimal to no administration costs, because it’s either untrue (the resources to manage it effectively must come from somewhere, maybe from the founders’ own personal resources) or a likely sign of bad management (they think that skimping on the funds needed to manage it effectively in the name of maximizing the basket of “program expenses” is a good organizational strategy). An organization that I think is well-run wants to spend on its cause as much as possible, but is mindful of needing to spend on itself also. If it cannot spend on itself—to hire good staff, to have good training, to use resources that cost money and save time, to plan its strategy and maintain regulatory compliance, to do whatever else an efficient organization needs to do—how can it possibly have the capacity to spend well on its programs? The money to sustain that charity is providing for its cause to be effectively addressed now and into the future.
“Unrestricted” says that you believe GiveWell is competent to make these allocations correctly between itself and its recommended charities. For GiveWell in particular, if you do not believe they can do this, why do you think they can evaluate other charities’ effectiveness? Presumably you want to give to the other charities because GiveWell has told you they are worth it, because you think GiveWell is competent at assessing organizational effectiveness. (For other charities, I would have lower expectations for assessment ability—but still I expect that I want to give to one in particular because it is effective at spending for its cause. There are few causes where you do not have much choice of how to direct your money to affect it. An effective one will be competent at running itself—not perfect surely, but competent enough that I don’t think I will do a better job at allocating its funds than it will by giving a restricted donation.)
Also, many people’s gut feelings direct them to give restricted donations to avoid “wasting” their money; it’s a feel-good option but one that does not help the charity stay around in the long term. People who are more considered should compensate for that by allowing the charity to use their funds unrestricted. I have no idea if GiveWell gets grants or not, but grant support from foundations is often restricted as well; it’s much harder to get grants for general operating support. But I won’t start that rant here.
(For background, I’ve been heavily involved in nonprofits for the past 10 years, as volunteer, staff, and board.)
Yeah, I think that’s right. I’m the same as people who don’t want to give to charities that have too much “overhead,” leading to perverse incentives, as you say. GiveWell itself can be looked at as overhead for the charities it recommends, even though technically it’s a different organization. As such they deserve to be supported too.
Will click “Unrestricted” in the future.
There’s a big difference between trusting someone about a third party and trusting someone about themselves.
Maybe this would be a coherent position:
You trust GiveWell’s judgement on which charities are the best choices
You think they’ve done enough work to establish this, at least for the time being
You don’t plan to give more money in the immediate future
Therefore, you want your money to go to the charities, not to a decision-making process that you now see as having diminishing returns
I’m not sure I’d buy it myself… it seems like it really only makes sense if you don’t think anybody else is going to be giving money to GiveWell in the immediate future either (or perhaps ever?).
You could also just think that GiveWell doesn’t currently have as much room for more funding as the recommended charities do, even though GiveWell may disagree with that assessment.
Holden wrote about donation restrictions on the GiveWell blog back in 2009 (bold and italics in original):
See also the following on the GiveWell blog:
The comments by GiveWell analysts Alexander Berger and Timothy Telleen-Lawton on giving to GiveWell in this recent post.
Holden’s post on GiveWell’s funding needs in 2013 where he asked more donors to donate unrestricted to GiveWell. (For their latest budget and revenue forecasts, see this document from September.)
I think the nasty part of the Hard Problem of Consciousness is probably in finding a naturalistic explanation for how things come to seem subjectively objective: for why the wavelength of red feels from the inside like a built-in quality of the world rather than a perception generated by a mind in response to a stimulus. I think the “social processing theory of consciousness” doesn’t quite explain this, at least not to my satisfaction.
Of course, the random thoughts I record in Open Thread are not liable to be high-quality.
For any agent, self-reflection has to bottom out somewhere, since working memory and cognitive capacity are finite.
That said, some meditation practitioners report being able to notice “the arising and passing away” of individual sensations, so it may be that this is just a matter of training rather than an essential feature of consciousness.
That insight would be an additional layer of processing requiring energy that could be used for other things. How would such an insight increase the brain’s reproductive success? We don’t have insight into our outputs either: you don’t intuitively understand how your muscles work in response to stimuli, which is part of the reason why it feels like we have free will and are agents in the world instead of parts of the world.
The true quality of stimuli is already lost and grossly simplified at receptor level, in the case of light the photoreceptive rod and cone cells of the retina.
Thinking about a quote from HPMOR (the podcast is quite good, if anyone was interested):
...
Besides the quoted “Chimpanzee Politics” are there any other references to this hypothesis? I’ve tried Googling around for 5 minutes and I couldn’t find anything.
Edit: seems like I was looking using the wrong keywords: Wikipedia seems to have a small paragraph on the evolution of the human brain due to competitive social behavior, but I’d still like to see if anyone else has any articles on the matter.
I believe one phrase for it is the Machiavellian Intelligence Hypothesis
The less dramatic name for it is the Social Brain Hypothesis. It was originally proposed by Robin Dunbar (of Dunbar’s number fame).
Not exactly a match, but Others in Mind: Social Origins of Self-Consciousness makes a variant of this claim, putting an arms race concerning prediction of third-party reactions to hypothetical actions as the driving force of human self-awareness.
I suggest Geoffrey Miller’s book The Mating Mind. Or search for sexual selection.
Yet many of the really smart humans seem to have trouble with social interactions. Dark Triad people who excel at getting others to do their bidding might have higher-than-average IQs, but they generally don’t go into STEM fields, where having a high IQ can pay off.
If you’re smart and “excel at getting others to do [your] bidding”, you don’t want to go into STEM, you want to go into management.
STEM pays you far less for being smart and manipulative than say, being a good lawyer or salesman.
Is there some resource (for instance, in the LW Sequences) where I can redirect people to get a quick, clean view of the Bayesian way of doing things, especially in science? When I read people say things like “Consensus doesn’t matter in science!” I want to respond with “Well, consensus isn’t everything, but being informed about the agreement in opinion of a large number of authorities in a subject should make you update your beliefs,” but I find it hard to do that without then having to explain what “update your beliefs” actually means.
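(For what it’s worth, the minimal formal content of “update your beliefs” is just Bayes’ theorem: on seeing evidence E, replace your prior confidence P(H) in a hypothesis H with P(H|E) = P(E|H) · P(H) / P(E). Expert consensus is evidence E in this sense, since experts are more likely to agree on H when H is true; but a one-line formula probably isn’t what wins people over, hence the question.)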
People don’t change their views about fundamental issues such as how they deal with knowledge quickly.
As far as articles go, Yvain’s article http://slatestarcodex.com/2013/08/06/on-first-looking-into-chapmans-pop-bayesianism/ is good.
“Change your mind”? “Take that on board”?
I think “change your mind” is too specific; in common parlance it’s used only for major changes, such as going from thinking “probably X” to “probably not-X”.
“Consider the balance of evidence”?
One of the hardest things for people to do is change the way they believe the world is structured, because psychologically such a change carries an aspect of death or non-existence.
I note that in my profile I can see posts that I have up or downvoted under “Liked” and “Disliked.” Is there a way to get a similar list of comments that I’ve up and downvoted?
There’s lots of material here in comment threads that I think is worth remembering but is tricky to re-find.
When turning in their final papers by email, a few of my students added positive comments about my class, such as “Thanks for a great semester, I really enjoyed this class”. All of my students will do an anonymous (to me) evaluation of my class. I imagine that both of these occurrences are commonplace among college students. It would be interesting to see if students who compliment the class also give high anonymous evaluations, and this might tell us something about the honesty of praise. This would be an easy study for a college to conduct: a large group of professors tells a researcher which students complimented them, then the researcher accesses the student evaluations and calculates correlations.
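The analysis itself would be nearly a one-liner; a sketch with made-up numbers (the arrays below are purely hypothetical, just to show the shape of the data):

```python
import numpy as np

# Hypothetical data: 1 if the student complimented the class by email, else 0,
# paired with that student's evaluation score (1-5) as matched by the researcher.
complimented = np.array([1, 0, 0, 1, 0, 1, 0, 0])
evaluations  = np.array([5, 3, 4, 5, 2, 4, 4, 3])

# Pearson correlation with a binary variable is the point-biserial correlation.
r = np.corrcoef(complimented, evaluations)[0, 1]
print(f"correlation between complimenting and evaluation score: {r:.2f}")
```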
Norms of politeness will be a confounding variable.
This is an interesting idea, but it may be very difficult to do in practice. At many major universities the students’ evaluations are fully anonymous: their names are never on them at all. And I suspect that the universities may be uncomfortable changing that even for a few classes just to help answer this question.
I’m considering trying out a paleo diet. I’m not totally convinced by all the arguments for it, but are there arguments against paleo that say it is actually bad for you? By “bad” I mean are there worthwhile arguments that switching to paleo is worse for your weight and longevity than your typical American diet?
As far as long-term consequences go, my ability to digest carbs without getting gassy or stomach achey seems to have been seriously damaged by my diet.
I’m afraid that it largely depends on your initial diet. For me, paleo was a net disaster (3 months with quality of life substantially reduced, and 3 kgs gained), but probably because I came from an already healthier diet.
If you factor in that on a high-protein diet you have higher-than-normal insulin spikes and you may lose the ability to process gluten, then you might actually gain some benefit.
In my admittedly small social circle, though, I’ve seen more people benefit from a vegetarian diet than from paleo.
Agreed. Paleo is vastly superior to SAD (standard American diet), but it’s an open scientific question whether paleo is better than other common diets. (I’m paleo.)
If you do low carbohydrate paleo there can be a difficult transition period (see Atkins Flu).
Is this because there’s no reason to think that it should be better or because there are contrasting data?
Because of the limits of nutritional science where they can’t run long randomized experiments on people. The theoretical case for paleo is excellent.
It might be true if the so-called paleo diets resembled actual old diets. But they don’t. Even in paleolithic times human diets varied by region. And many of the crops included in paleo diets didn’t exist in the cultivars and forms they exist in now. See for example this talk.
It’s not a binary thing. Eating a diet consisting mostly of grass-fed meat, seafood, and vegetables is a lot closer to what our ancestors ate than the standard American diet is.
As James_Miller said, it’s not a binary thing. Paleo-style diets are closer—not identical—to actual paleolithic diets (compared, say, to SAD).
The point of paleo isn’t really to pretend to be a caveman, though reading some promotional materials will give you that idea. The point is to get rid of food which humans have encountered only recently in evolutionary terms.
Paleo certainly doesn’t go all the way (for one thing you’d have to exclude pretty much all cultivated fruits to start with), but makes a step, and it’s debatable how big of a step, in that direction.
Could you provide some solid evidence? I’ve never found something that didn’t crumble at the first investigation.
I can’t back up any of this with solid citations, but: If our ancestors have been eating a food for a very long time, that’s Bayesian evidence that the food is safe. We have been eating meat for so long that it seems likely parts of us are dependent on stuff we can get only from meat. Cancer, heart disease, and strokes seem to be mostly diseases of civilization that were relatively rare among hunter-gatherers who ate their traditional diets. Things go really badly for hunter-gatherers who switch from their traditional diets to modern diets. Wheat is cheap to grow, so even if it is unhealthy it’s understandable that it would be widely consumed. It’s also understandable that sugar, being a superstimulus, would be widely consumed even if it is unhealthy. Lots of people who try paleo succeed in losing weight. The modern obesity epidemic shows something is very wrong with SAD (Standard American Diet), and paleo offers a tried-and-true safe harbor.
Cancer and heart disease are diseases of longevity. Why expect paleo to help with them when there’s every reason to believe longevity wasn’t a part of that environment?
I don’t have data at hand, but I think that’s only partially true. Yes, the prevalence of cancer and CVD is a function of the age of the population, but as far as I remember, even after you control for age, they still show up as diseases of civilization, with the “primitive” societies having considerably lower age-adjusted rates.
At least one causal pathway for that is visible: diabetes and the metabolic syndrome in general are clearly diseases of civilization and they are strong risk factors for CVD (I don’t know about cancer).
Interesting—I’ve modeled all cancer in my mind as vaguely similar to testicular cancer—one is likely to get it, but unlikely to die of it unless one survives many other potential causes of death.
In other words, I’m not sure if the data we care about is prevalence-of-cancer or prevalence-of-cancer-deaths.
On reflection, I think the assertion in question is essentially “Paleo diet creates more QALYs,” which should be answered in part by how much the prevalence of cancer affects quality of life even if the cancer was not a causal factor in death.
Cancer is really cancers—it’s a class of diseases which are pretty diverse. Some are slow and rarely actually kill people (e.g. prostate cancer), some are fast and highly lethal.
I think we care about prevalence of cancer (morbidity) because the prevalence of cancer deaths (mortality) heavily depends on the progress in medicine and availability of medical services.
My impression is that the answer is “a lot”.
While I don’t have the stats, I think that 50,000 years ago if you lived to 30, you had a reasonable chance of living to 70, and cancer and heart disease kill lots of people under 70.
“Longevity Among Hunter-Gatherers: A Cross-Cultural Examination”, Gurven & Kaplan 2007; might be helpful.
Good article. Quotes:
When I did my paleo experiment, I wasn’t really eating any more protein than before. My carbs were way down though, and my fat intake way up. I wonder if this may explain some of the difference in our outcomes.
Huh? Gluten sensitivity is an autoimmune problem, you don’t acquire it by not eating wheat.
People who have done paleo can sometimes have issues processing carbs because you lose the gut bacteria necessary to do so. It was poorly phrased. The transition back can be hard.
Even then, it’s not like you “lose the ability”—the gut microbiota changes fairly rapidly so you should be fine in a few days. But I agree that transitions between very different restrictive diets can be hard on the body.
This runs contrary to what I’ve read, though not specifically on gluten-related microbiota. Apparently a change in the overall ecology of gut microbes can be very difficult to recover from.
Here is some data on the ease of changing one’s microbiota. However you have a valid point in that the persistence of particular kinds of microbiota is not well understood and evidently it’s possible to fall into, um, local minima that are hard to get out of (thus the whole fecal transplant business).
I suspect the reality here is much more complicated than the simple “easy to change”/”hard to change” approach.
It’s more complicated than that. Small intestine microbiota might have a role in the genesis of celiac disease.
Link? There are theories that infection by certain viruses can trigger celiac in genetically susceptible people, but infection doesn’t usually count as microbiota.
I’ve found this: http://www.ncbi.nlm.nih.gov/pubmed/25483329
Well, it’s behind the paywall, but even the abstract is pretty clear that there are no results (aka no evidence). The paper seems to want to “discuss future research directions”.
It’s a review paper. Of course it doesn’t present its own experimental results. It says it presents data (from other papers, no doubt) correlating CD with small intestine microbia, though this data is not sufficient to show causation.
What I read is: it’s possible that microbiota alterations have a role in CD, but studies that focus on that link are missing, so we should investigate more. Your sentence seems to imply that you read the exact opposite in the article: that studies were made but didn’t find any link. I know that’s not what you stated, I just want to be clear on the presuppositions.
As you said: there is no evidence, but there might be if it was investigated. So a role of microbiota is possible and at least not fantastically improbable. That’s why I used “might” five comments above.
I said “no results (aka no evidence)” which implies no data pointing one way or another. If I had said “negative results” that would have implied that there is evidence disproving the hypothesis.
LOL. There is a HUGE space of hypotheses for which there is no evidence but which are “not fantastically improbable”. Oh, and there’s a fellow with a razor here, he wants to talk to you… :-)
I like Paleo with a mix of fast- and slow-digesting carbs. I eat a primarily protein diet and then mix in small amounts of different kinds of carbs and a little non-cow dairy.
MIRI was mentioned in today’s EconTalk podcast on AI, just in case anyone is interested.
Link to the podcast, with transcript.
The mention of MIRI, about (bad) AI forecasts:
(If people have always been saying it’s 20 years away, then the median prediction wouldn’t be 20 years from today.)
Um...no, that’s exactly what it means. Today is a subset of always.
What’s going on here is ambiguity between median date and median interval. (And I’m fairly sure Gary Marcus is talking about the median interval rather than the median date.)
Yes. If the median interval is 20 years, then the median date is not 20 years from today (modulo weird hypothetical data sets which I don’t think we actually have).
I think Gary’s aware that the median interval has always been 20 years. But when he says this:
It sounds like he’s saying that the median date is 20 years from today. I guess another interpretation would be something like “the median interval is 20 years, and you might think that that’s because in the 1950s they were saying maybe 40 years, and the interval’s been reducing, and now they’re saying maybe 5 years, and the median is 20 - but actually, when you do divide it up by year, you see that they’ve always been saying 20”. But that would be kind of forced.
This is just minor nitpicking, I think that he misspoke rather than misunderstood, hence the parentheses.
Actually, I think your “other interpretation” is almost certainly what he meant; it seems to me the most natural reading of his words. So I don’t think he even misspoke.
The Anti-Drug
I’ve seen that a lot of drugs seem to act like “gratification borrowers”: they take gratification/happiness from the future and spend it all on the present, sometimes extremely quickly, then leave you feeling miserable for a certain duration, the “low” or “hangover”.
I was wondering whether there was any drug that did the opposite, that functioned like delayed gratification: a drug that makes you feel utterly miserable at first, then eventually leaves you with a long-lasting feeling of satisfaction, accomplishment, and joy.
Does anyone here know of such a thing?
Reminds me of a Gregory Bateson quote:
Hm, a drug where the negative effects are first… Well, almost all medicines fit this category: unpleasant taste is common.
Illegal drugs that people want to take? Ayahuasca comes to mind:
Exercise.
But more seriously, try asking this again in the next open thread, this one seems flooded.
Actually exercise has been suggested to me as the alternative to drugs. “Spinning”, specifically. Addictive, very pleasurable, and makes you healthier (unless you overdo it, but sports are much more difficult to overdo than drugs, for some reason).
There are drugs for alcoholics that make you sick if you drink, so it makes you feel miserable short-run but may help you to be stable/functional/productive long run.
In my experience Salvia divinorum works very much like this.
Could you elaborate on any specifics? Apparently the plant is legal in most of the world and only prohibited in very few countries.
For me it produced a feeling I can best describe as a tactile analog of the sound of fingernails scraping on a blackboard. It’s not exactly pain but something similar and unpleasant. When it wore off I would feel noticeably happier for several hours. I didn’t repeat the experience many times, partly because of the unpleasant feeling, and partly because I didn’t find a good delivery method other than smoking. I used a concentrated extract strong enough that I could get the full effects from two inhalations, but once I’d done it enough to gain a basic understanding I didn’t consider further use worth the risk of smoking.
The main effect is strong hallucinations very distinct from those I got from magic mushrooms. Much less colorful, less detailed but more realistic imagery (similar to dream imagery), extremely strong tactile and proprioception distortion, little if any time perception distortion, weaker audio distortion, and completely overpowering all other sensory input at the peak. There was always an undercurrent of unease; unlike mushrooms, which felt a very natural and appropriate mindstate for humans, Salvia had an alien and threatening feel to it. The peak only lasts about 2 minutes and the whole thing is over in about 15.
Thought: I think Pascal’s Mugging can’t harm boundedly rational agents. If an agent is bounded in its computing power, then what it ought to do is draw some bounded number of samples from its mixture model of possible worlds, and then evaluate the expected value of its actions in the sample rather than across the entire mixture. As the available computing power approaches infinity, the sample size approaches infinity, and the sample more closely resembles the true distribution, thus causing the expected utility calculation to approach the true expected utility across the infinite ensemble of possible worlds. But, as long as we employ a finite sample, the more-probable worlds are so overwhelmingly more likely to be sampled that the boundedly rational agent will never waste its finite computing power on Pascal’s Muggings: it will spend more computing power examining the possibility that it has spontaneously come into existence as a consequence of an Infinite Improbability Drive being ignited in its near vicinity than on true Muggings.
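(A toy sketch of the argument, with made-up numbers; the probabilities and utilities below are my own illustration, not a formal model:)

```python
import random

# Toy mixture of "possible worlds". The last one is the mugging:
# astronomically valuable if true, but with probability 1e-9.
worlds = [
    (0.600000000, -5.0),    # mundane world: paying the mugger just loses $5
    (0.399999999, -5.0),    # another mundane world
    (0.000000001, 3e100),   # the mugger is telling the truth
]

def sample_world(rng):
    """Draw one world's utility-of-paying, in proportion to its probability."""
    r = rng.random()
    for p, utility in worlds:
        r -= p
        if r <= 0:
            return utility
    return worlds[-1][1]

rng = random.Random(0)
n = 10_000  # the agent's finite compute budget, expressed as a sample size

# Sample-based estimate of the expected utility of paying the mugger.
estimate = sum(sample_world(rng) for _ in range(n)) / n
print(estimate)
# Prints -5.0: the mugging world is essentially never drawn, so the bounded
# agent refuses -- even though the exact expectation is about +3e91 and would
# dominate a complete calculation over the whole mixture.
```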
There are other ways of taking Pascal’s mugging into account. You shouldn’t do that based on lack of computing power. And if you aren’t doing it based on lack of computing power, why involve randomness at all? Why not work out what an agent would probably do after N samples, or something like that?
Well, it’s partially because sampling-based approximate inference algorithms are massively faster than real marginalization over large numbers of nuisance variables. It’s also because using sampling-based inference makes all the expectations behave correctly in the limit while still yielding boundedly approximately correct reasoning even when compute-power is very limited.
So we beat the Mugging while also being able to have an unbounded utility function, because even in the limit, Mugging-level absurd possible-worlds can only dominate our decision-making an overwhelmingly tiny fraction of the time (when the sample size is more than the multiplicative inverse of their probability, which basically never happens in reality).
Importance sampling wouldn’t have you ignore Pascal’s Muggings, though. At its most basic, ‘sampling’ is just a way of probabilistically computing an integral.
Well, they shouldn’t be ignored, as long as they have some finite probability. The idea is that by sampling (importance or otherwise), we almost never give in to it, we always spend our finite computing power on strictly more probable scenarios, even though the Mugging (by definition) would dominate our expected-utility calculation in the case of a completed infinity.
It seems to me that Islamist terrorists are trying to maximize defection from the larger society, and they’re even able to recruit Kurds. Admittedly, they’re only getting a tiny proportion of people, but why are they getting anyone at all?
Would anyone care to take a crack at whether there are conditions under which this makes sense in terms of game theory?
I think emr, above, makes some very good points, but I think you guys are all missing some crucial aspects of the situation.
The places where a distinctively Islamic terrorism has taken off (Algeria, Chechnya, Somalia, Afghanistan, Iraq, Syria) are all areas that have been ravaged by civil war or foreign occupation, leading to the breakdown of co-operative mechanisms in the wider society. In other words, the only move is defect. Yet at the same time, these societies (or sections of them) retain a distinctively Muslim identity and aspiration, so the natural way of forming new, co-operative institutions is to base them on that shared Muslim identity. Those participating in these movements no doubt see themselves as conducting a new Abbasid Revolution. This also draws in sympathisers from outside the country. Yet the outcome often becomes terrorism, because:
There are non-Islamic, or non-Sunni counter-currents within that society
Islamic regimes arouse massive hostility in the West
Precisely because these movements arise in the context of existing civil war/violence, it leads to a Hayekian “Worst Get On Top” dynamic, where more moderate groups get forced out.
In other words, suppose you’re a Sunni in Eastern Syria or Central/Northern Iraq, and you want to co-operate to protect your friends and family from, say, being kidnapped and tortured by the police. You tried voting for al-Iraqiyya, and indeed they won the election, but half their candidates got thrown out of Parliament, so they can’t legally stop the state machinery from persecuting you. And those guys aren’t gangsters, so they can’t use extra-legal means to protect you. So maybe the co-operative move, at least in a lesser-of-two-evils sense, is to join ISIS. And hey, maybe that worked, because ISIS’s success caused Maliki’s government to collapse, and maybe the new government in Baghdad will govern in a non-sectarian way, at least for a while, in the same way that al-Qaeda-in-Iraq’s initial successes gained concessions and allowed the ‘Sunni Awakening.’
As for why these organisations are able to recruit worldwide—I don’t know why you consider it so surprising that they should get “anyone at all.” It’s like medieval Christians signing up for the Teutonic Knights, or Georgian Hellenophiles going to fight for Greek independence. These people are ignorant and idiotic, with a notion of what they’re doing that’s utterly divorced from reality, but there’s never been a shortage of romantic fools.
That may be enough to explain it, but I do think groups that compete in committing the worst public atrocities are somewhat unusual—Hitler and Stalin made some effort to conceal the worst of what they were doing.
Thanks for the Hayek link. Do you recommend the savageleft site?
I don’t know anything about that site. They just happened to have a freely available copy of that chapter.
If you look at pre-20th century (and especially pre-Enlightenment) history, you see that it is Hitler and Stalin that are unusual.
I think the linked article hits a few common themes about why this might happen:
Sunni Islamist groups manage to convince some Sunni Kurds that the “Sunni” part overrides the “Kurd” part, at least while there’s a good opportunity to gang up on a more-hated outgroup.
Broadly, an Islamist group will claim that they represent the larger and true community of cooperators, and so defection is presented as the true cooperative move.
The defining feature of culturally foreign recruits has been low status, while only a weak Islamic heritage seems to be required. It’s possible that literally no mainstream group wants some of these people, while a terrorist group will promise them status. Radical groups are often unwittingly assisted in this by a foreign media which dramatically exaggerates the seriousness of these groups. (This is connected to the oft-cited effect that media coverage has on encouraging school shootings.)
The community is unable or unwilling to punish defection. Bluntly, most recruits come from communities where they can expect at least ambivalence, if not some support, for actions we would describe as terrorism. Think of Saudi Arabia. Given the official ideologies, one has to resort to game theory (signaling) to explain why there aren’t more recruits, and to things like Reason as a memetic immune disorder to explain who acts and who doesn’t.
The few recruits who can genuinely be described as defecting from the entire local community are more puzzling. My best guess still involves low status, the law of large numbers, and (at least sometimes) a pathological inability of the host society to respond to defectors who don’t defect in culturally familiar ways, as in the recent Australian terrorist attack.
I rather clearly shouldn’t have included the link about the Kurds—it’s a distraction from the question I’m more interested in, which is why organizations like Al Qaeda and ISIS exist and are able to recruit worldwide.
Unless I’ve missed something, most eras don’t have anything comparable.
Well, to be clear, ISIS-actually-in-Iraq-and-Syria seems quite different from modern Al-Qaeda and self-declared-ISIS-affiliates-abroad.
The Spanish Civil War might be the closest match for the foreign support for ISIS-actually-in-Iraq-and-Syria. In addition to direct foreign involvement as a proxy war, it attracted significant numbers of volunteers from countries whose governments officially opposed their citizens going to fight (like Ireland), and whose identities partially overlapped with a particular faction (Catholics, socialists, separatists broadly defined, etc.).
Although the bulk of the recruits for ISIS have come from neighboring regions, that so many Europeans have joined ISIS-actually-in-Iraq-and-Syria might just be the result of a larger population of recent immigrants who maintain some shared identity with those involved, combined with easier travel and communication.
Modern Al-Qaeda and ISIS-affiliate-abroad franchise schemes do seem more unusual. You certainly could not have had comparable groups without modern communication and travel. In general, you have things like authentic branding, access to funding (Gulf States), access to expert advice, possibly better scaling, risk pooling, and a sort of meritocratic weeding-out of worse strategies that incentivize geographically separate groups to affiliate rather than work independently, in fairly direct analogy to commercial franchises, along with more local interests that favor independence. I don’t know why this particular set of terminal values has geographically widespread support in the first place, though.
It’s all about motivation. One of the biggest drivers of joining a terrorist group is a sense of powerlessness over a situation, or gullibility toward wider teachings. I recommend reading “Learning to Eat Soup With a Knife.” Not only does it have great bits about counter-insurgency strategy, it also gives you a picture of the people involved.
Because people are different and in a sufficiently large population you can find outliers (don’t forget people are not rational).
An AI researcher with good academic credentials is wanted. If you are interested, please send me a private message.
I was unsure where to post this question; let me know whether it merits its own thread. I also apologize if this post is a bit messy.
If I had to title this post, I might name it, “Optimizing College Activities for a Future Programmer”.
I’m a college student at an American school. It’s quite a lot of work—more than I can do in the time given—and I have a study routine that’s more efficient than that of a lot of people I know. I was handling it relatively well last year and still getting enough sleep, exercise, socializing, etc. -- basically all the things I would consider essential for keeping me sane.
I do not do drugs. I do not watch television or movies. I am vegetarian. My room is not decorated and I do not buy expensive items. My socializing thus far has consisted of talking with people over meals while walking around campus. I am not in a relationship. I spend most of my time studying and doing school assignments. I have a relatively good GPA and have worked hard to maintain it. But the work is getting harder and I’m thinking I’ll need to start putting less work into my classes and accept a lower GPA, because I cannot compromise the essentials (meditation, sleep, etc.). It’s been too stressful to do as much coursework as I’ve been doing and to skip the essentials.
I plan to pursue a career in software engineering / outside academia. I’m double-majoring in math and CS. I do not plan to get a master’s degree or a PhD (at least, not any time soon). I understand that CS students’ grades don’t matter much, though I do think I can benefit from doing as well as I can in my classes. (But I’m also willing to work less in college to be happy.) I also have a great coding job that I’ve been neglecting because of my studies, but I don’t want to neglect it any longer.
Some questions:
Should I let my grades drop a bit and instead work my coding job and ensure I’m doing the “life essentials” on a daily basis? I will be replacing some of my academic work with programming, which is in my estimation pretty valuable. I’m on a scholarship and it requires that I maintain at least a 2.0 GPA, but I’m quite confident my GPA isn’t in danger of dropping that low.
Should I put more emphasis on socializing and forming a network? I don’t use Facebook much, and my interests don’t intersect with most of my peers’ (see the partying / drinking / buying expensive food stuff above). I’d rather spend time with people who are doing interesting things and who I can relate more with (is this bad?), but I’m having trouble finding such people on my campus. How do other “rationalists” form social networks in these kinds of environments, or do they?.. I don’t want to miss out on something essential (like developing social skills and / or a network) if it is actually essential. (To be fair, I am a bit awkward and often find I don’t have anything to say to my peers, but I think this is again because my interests differ from others’… But maybe I’m wrong.)
How should one pursue the development of social skills? How much time should one put into it, vs. into coding, studying, etc.? Based on what I’ve read, being friendly and someone people can get along with and want to spend time with can work wonders in all sorts of circumstances. Relative to my peers, I’ve put less time into meeting and hanging out with people, and I think I’m less socially adept. I’d love to improve. Any suggestions?
On a related note, how can I find people with whom I’m compatible? I frequently run into people I don’t want to spend time with, but rarely do I meet people whose presence electrifies me.
Any other general advice? e.g., I haven’t read anything outside of class since the summer, and I’m thinking it would be good to read during the semester.
I have two pieces of advice for you. Please take them with a grain of salt—this is merely my opinion and I am by no means an expert in the matter. Note that I can’t really recommend that you do things one way or another, but I thought I would bring up some points that could be salient.
1) When thinking about the coding job, don’t put a lot of emphasis on the monetary component unless you seriously need the money. You are probably earning less than you would be in a full time job, and your time is really valuable at the moment. On the other hand, if you need the money immediately or are interested in the job primarily because of networking opportunities or career advancement, then it is a different matter.
2) Keeping up a good GPA is not equivalent to learning the material well. There are certainly corners you could cut which would reduce the amount of work you need to do without losing much of the educational benefit. As the saying goes, 20% of the effort gives 80% of the results. If you are pressed for time, you may need to accept that some of your work will have to be “good enough” and not your personal best. Having said that, be very careful here, because this is also an easy way to undermine yourself. “This isn’t really the important stuff” is a fully general excuse.
Thanks for the response.
Re: 1) I’m not as focused on the money as on the programming opportunities it might later lead to.
Re: 2) I agree with everything here. What do you mean in your last sentence?
I’m having trouble finding the original sequence post that mentions it, but a “fully general excuse” refers to an excuse that can be applied to anything, independently of the truth value of the thing. In this case, what I mean is that “this isn’t really the important stuff” can sound reasonable even when applied to the stuff that actually is important (especially if you don’t think about it too long). It follows that if you accept that as a valid excuse but don’t keep an eye on your behavior, you may find yourself labeling whatever you don’t want to do at the moment as “not really important”—which leads to important work not getting done.
The post is “Knowing About Biases Can Hurt People”. See also the wiki page on fully general counterarguments.
Thank you! That is exactly what I was looking for.
My advice would be to get just a minor in math so you can fit in some easy electives. If you're considering a career change out of computer science as a possibility and just want to keep options open, there are very few grad schools that would accept a math major but not a math minor. In the job market, a CS major plus a math minor is probably not much different from a double major. This may not hold for every possibility, but it should hold for most of them. Certainly most employers will compliment you and then shrug when hiring for programming jobs.
A high GPA is somewhat helpful when you’re looking for your first full-time programming job, but probably not as helpful as how prestigious a school you went to, and definitely not as helpful as industry experience or actual nuts-and-bolts engineering skill. This latter is not to be underestimated; a lot of new CS grads can’t quickly write accurate pseudocode, for example, and you will be asked to do that at some point during any half-decent interview. If you can’t do it, that’s going to be a deal-breaker, but having a 3.2 instead of a 3.5 GPA almost certainly won’t be.
After a couple years of full-time experience, almost no one will ask about your GPA (they will, however, ask where you went to school and what degree you got), meaning that GPA is important to your long-term career prospects almost entirely insofar as it affects your standing right out of college. It’ll also affect your prospects for grad school, if you’re interested in going that direction at some point.
I can’t stress this strongly enough. I’m an engineer at Google, and do a lot of interviews here.
The hiring decision is based on interview performance, estimates of where you should be—based on history, age, etc.—and such things. It is not at all based on GPA. GPA is only important inasmuch as it might get you past the first bar of getting an interview at all, something you can automatically bypass by having someone who already works for Google recommend you, as well as in a couple of other ways. Personally, I was headhunted over IRC; my GPA never came up at all, and they didn't ask for a copy of my transcript until after I'd already signed a hiring contract.
This is, of course, not true for all companies—but I do think it’s true for the better ones. In short, actual skill is what matters.
Before you do anything else, reconsider your class schedule. A higher GPA will probably mean more to your future career prospects than keeping the math major. Also, balance your schedule so you have a mix of lower-work/more-gently-graded classes and harder classes every semester.
Consider dropping the job depending on the criteria ChaosMote noted.
Your lifestyle sounds pretty sterile. You should make an effort to socialize more. College can be a place you make friends who last a lifetime, if you put some effort in.
What leads you to think this?
What kind of effort?
One major is enough; many companies look at GPA, and a low GPA can rule you out, but a missing second major won't. The Pareto tradeoff between GPA and taking more classes also usually favors a single major. But if the second major is a very small commitment for OP, my advice doesn't stand.
Talk to people, be friendly, stay in touch, initiate social activities, be a good friend.
I think you’re missing,
All your suggestions are good, but it's not worthwhile to put lots of energy into building friendships with people you don't like.
I wound up not liking anyone I met at the University of Tulsa. But then, I had to transfer there after three semesters at a real university—Washington University in St. Louis. Imagine going from just below the Ivy League to a place which, according to its Wikipedia page, has only one notable alum with a STEM degree, compared to dozens of alumni in areas like entertainment, business, and sports:
http://en.wikipedia.org/wiki/List_of_University_of_Tulsa_people
Versus:
http://en.wikipedia.org/wiki/List_of_Washington_University_alumni
Could you join a fraternity? Best decision I made while in college.
I’m unsure this is the right decision for me, given that I don’t particularly enjoy partying or drinking. Why did you join a frat? What did you get out of it?
Also, the German name suggests a European location, which means that fraternities are pretty much dead around here.
The rent is extremely low, and the connections that I made there were valuable. I got my first job from a brother.
On this note: did your social skills improve after joining?
… And, more generally, how should one pursue the development of social skills? How much time should one put into it, vs. into coding, studying, etc.? Based on what I've read, being friendly and being someone people can get along with and want to spend time with can work wonders in all sorts of circumstances. Relative to my peers, I've put less time into meeting and hanging out with people, and I think I'm less socially adept.
This is something I’d love to get better at.* Any suggestions?
*That, and finding people with whom I’m compatible. I frequently run into people I don’t want to spend time with, but rarely do I meet people whose presence electrifies me.
It’s literally practice. Practice basic social skills like telling stories, listening, and relating, and practice getting into a social state.
I don’t know. I mean, how can you measure your own social skills? You might think you have a bunch of friends, but what if they are just laughing behind your back? No way to know. I don’t think my social skills particularly improved during my time at Psi U, but they were fine going in.
I’ve got nothing on how one should pursue the development of social skills. Maybe make it a practice to meet a new peer group every couple of months? Part time jobs are great for this. I don’t know, seems like there’s got to be information on how to be a people person out there, self help books and such. It feels like a common problem.
If he thinks that his workload will ramp up, now isn’t a great time to join, because he’ll have to spend a semester going through the pledge process.
I do highly recommend fraternities otherwise though—they’re a great network and place to meet friends, a good place to practice and develop social skills when interacting with the rest of the Greek system, have a lot of people focused on self-improvement, and they often take care of a lot of the overhead in college (by providing meals, laundry machines, a gym).
This being a good decision is of course contingent upon you actually liking the people in the fraternity as well as the particular attitude of the house in general. And if you’re not into the party scene or nights out on the town, it may not be a good cultural fit.
How many hours is your coding job? If it's 10+ and they'll allow you to reduce them, you could go for that—you'll still get to list x months of experience there on your CV, and still get the experience and network from it, but you'll have a bit more time.
Don’t cut into the basic time you need for the essentials of life—being stressed or sleep deprived etc. will only make you less productive and exacerbate the problem. You also don’t mention any hobbies—do you get regular exercise? I know this means more time spent, but if you don’t get much already even 20mins every other day will make you more alert and productive.
There are easier courses and harder courses—try to take easier ones (still meeting requirements) as long as they don’t conflict with your interests. You can ask classmates about which are easier/have better lecturers.
Prep for your classes in the holidays—before each year I’ll look at the syllabus of the courses I’m taking and look up each of the terms there, spending maybe 30mins on each getting a general idea of what’s involved. This means you’re not seeing it for the first time in class, which makes it way easier to learn and retain (less overall effort).
Also ask why the work is harder now. I often find work hard for one of a few reasons: either I don't have the background; I'm tuning out of the lectures (because the lecturer is boring, because I already know most of it, or because I feel like I don't know enough to understand it even if I tune in); the work is actually time-consuming but not hard and this registers as 'hard' because I don't want to start it; or I have just a few problems/knowledge gaps and don't have the resources (friends, lecturers, example problems) to turn to to fix them. Each one of these has a different fix—for the last one, for example, having friends in that class helps immensely, because you can each fill in those little gaps for each other. I find it useful in math particularly.
Finally, there will be people at your college who also hate partying/drinking/etc. I've been lucky, having a solid group of 7 friends pretty much since I started college, all of whom aren't interested in drinking or partying, have similar majors (a lot in IT) and are happy to just hang out between classes and chat/study with each other. I'm not entirely sure how you can find these people other than persistence—if you're looking to go flatting, perhaps look at flats that say they are 'quiet'; if you're doing group projects, try to group up with the harder-working members of your class, etc. Then follow up on this—ask where they hang out when they're not in class and if you can join them. If you find one or two people with a similar outlook to you, you'll tend to find a whole bunch, because their friends will be similar to them.
In the same way that we have other periodic threads, how much of a good or a bad idea would it be to have a periodically posted thread where we could post our recent rationality failures?
I recently posted a discussion post on a similar topic, albeit for archival reasons rather than regular community support. Either way, I'd encourage you to try it out. There seems to be a cautious attitude on LessWrong that if one asks about starting a regular or periodic type of thread, a consensus will come forth giving great reasons either for or against the thread's institution. This doesn't really seem to happen. If you start this thread, include in the first one a note like "If this is used lots, or well-received, then we can make it a monthly thread". If it gets lots of comments, or gets lots of upvotes, or even gets a few upvotes with a decent upvote:downvote ratio, I figure that's grounds for posting the thread a second time. The group rationality diary encourages people to report mistakes and failure modes, but trying a thread explicitly and only about reporting failures and mistakes might work.
I wouldn’t mind a new monthly thread for that topic, but I think it’s (in theory) already covered by the Group Rationality Diary series, at least in the sense of trying to do something rationally and failing. There’s also the Mistakes repository for major life mistakes.
PredictionBook has been linked to and discussed here before. I'm one of the (few) active users, and I'm curious why more people who are regulars here don't use it or don't use it frequently. People who don't use PredictionBook, why don't you? Part of why I am curious is that if there are interface or similar issues then now might be a good time to speak up, since Jayson Virissimo is working on a similar service here.
I’m not really sure how to answer this. Predictionbook is just one of thousands of websites I don’t use.
I use PredictionBook from time to time.
A huge part of why I'm not using it more is that it's often hard to find interesting predictions to vote on. The recent-predictions page, for example, doesn't let you view a second page.
It doesn't seem to be possible to simply look at all the predictions with tag X. I think the way Stack Exchange handles tags works quite well.
I think commenting should be separated from individual prediction making. There’s no reason why the website shouldn’t allow full threaded comments.
I used to use prediction book, but stopped because the interface was awful and the site was full of bad predictions that should have been private but weren’t (e.g., I’m going to pass my exam tomorrow—just like that, without any context).
If only publicly verifiable/falsifiable predictions were permitted, would you have stuck around?
I think the issue is not about allowing/disallowing such predictions but about having good tagging that marks all those predictions with "private life", plus the option for people to block that tag the way you can block tags on Stack Exchange.
Yes, one of the most common issues people have brought up with PredictionBook is the lack of tagging in general which also makes it harder to search for predictions. I tried at one point to add comments to my predictions of the form [tag] but it didn’t work very well and no one else was doing it. If there were a formal way to do it with searchable categories that would be really helpful.
Rather than simply forbidding them, I'd suggest having various sharing options ranging from "public" through "people I invite" to "just me."
If anyone is going to implement such a thing, I think it’d be important to have separate calibration curves for each class of publicity.
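As a minimal sketch of what that could look like, assuming an invented data layout (each prediction recorded with its publicity class, stated confidence, and outcome):

    from collections import defaultdict

    def calibration_curves(predictions):
        # predictions: iterable of (publicity, stated_confidence, came_true).
        # Returns, for each publicity class, a mapping from confidence
        # bucket to the empirical frequency of predictions coming true.
        buckets = defaultdict(lambda: defaultdict(list))
        for publicity, confidence, came_true in predictions:
            buckets[publicity][round(confidence, 1)].append(came_true)
        return {pub: {c: sum(v) / len(v) for c, v in sorted(b.items())}
                for pub, b in buckets.items()}

    data = [("public", 0.9, True), ("public", 0.9, True),
            ("public", 0.9, False), ("just me", 0.9, True)]
    print(calibration_curves(data))
    # {'public': {0.9: 0.666...}, 'just me': {0.9: 1.0}}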
It has a “just me” mode that the authors of those questions should use, but don’t.
No; that’s too restrictive. It just needed more moderation and a lot more love.
I use it when linked to it on interesting questions, usually HPMOR. But the site is too thinly used and too poorly designed to have a really robust use for me.
Disclaimer: This application was designed for the sole purpose of helping me obtain employment in software development and is not yet production ready. Use at your own risk (of data loss)!
Still, having said that: let me know if you have any ideas.
To judge a prediction on your site at the moment I have to:
1) Click on the prediction in the list.
2) Click on the text field which contains the number
3) Switch to the keyboard to type in the number, then press Enter to affirm the prediction
4) Use the mouse again to get back to the prediction list
The second step could be eliminated by autofocusing that field. You might also think of a way to provide buttons so people can enter data via mouse clicks.
It would also be nice to have a "next prediction" button on the prediction page, to make it easier to enter a bunch of predictions.
I tried to create a prediction and the free-text field was confusing. A calendar view would be a lot better. When I typed in "355 days", it gave me a prediction that was supposed to be judged in 2 hours, with no way to edit it.
I recently wrote a small (python, command-line) program for prediction tracking. Above all, I wanted something that allowed very quick entry, that stored everything in a single plain-text human-readable file on my own computer, and that I could easily customize.
As far as features go, I also wanted a tagging system and to be able to do more sophisticated analysis. For example, I wanted to classify predictions as e.g. "work" or "politics" when looking at accuracy, or to see how my calibration changed over different times or with how far in advance I had made a prediction.
Since the predictions that I care most about aren’t very interesting to other people, I don’t miss the social aspect. Nor do I want to obscure a prediction that contains personal details or store it separately.
An example:
If you wanted to predict that it will rain today, with 50% confidence, and tag the prediction with the “weather” and “external” tags, and be reminded to judge the prediction 2 days from now, you’d type this on the command line:
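Something roughly like this, with illustrative flag names:

    predict "It will rain today" --conf 0.5 --tags weather,external --due 2d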
The created entry in the log will look something like this:
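With an illustrative layout (the real fields include at least the "state" mentioned below):

    2014-12-15 | "It will rain today" | conf: 0.5 | tags: weather, external | due: 2014-12-17 | state: open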
Running "predict --due" will either ask you about any predictions that are ready to be judged and update the corresponding log entry, or just change "state" to "due" so you can ctrl-f and edit the log directly. (Since the log is just text, you can manually edit any entry, like changing a prediction from "open" to "true" or "false".)
Running "predict --stats" will dump statistics.
I’m very happy with this work flow, and I’m hoping to clean up the code and share it when I get a moment. It’s all of ~250 lines, but I couldn’t find what I wanted out there already.
I don't like the design and usability of the site. I think topics should be somehow divided, because not everyone is interested in soccer or currencies, which are the most trivial things to predict.
I’ve occasionally used it, but I often want to make a prediction while I’m not near a computer and doing it on a smartphone is a bit of a pain. I also don’t have enough friends using it, so using it to prove I was right isn’t all that possible.
I used to use it, but as I shifted to doing more on my mobile device I found it was a pain to use and so stopped. If it had a good app then I would use it again.
It seems sort of pointless. Why do I need to know how accurate my predictions are? How would that benefit me?
Three reasons. 1) Other people can also see your predictions and see if your predictions are worth paying attention to. 2) It helps you become less overconfident. 3) If you keep mispredicting in some specific category then you realize you need to rethink your basic premises about that area.
Sure, but:
1) Who cares what other people think? Why should I help them if I have good predictions, or hurt them if I don't? I don't feel their joy or pain. I could see the necessity if I were receiving scorn, but I'm generally respected. Why try to adjust that?
2) Confidence makes happiness, right? I'm generally confident. Some part of that probably isn't deserved. Why find out?
3) I don't though, or if I do I don't know about it. Why find out? I'd be distressed.
Well, sure, if WalterL is a complete island and has no interest in getting other humans to pay attention to his predictions about the world, then 1 doesn't terribly matter. As for 2, this seems to be essentially an ignorance-is-bliss argument, which, if you are convinced of it, makes me wonder why you are on LW at all. Moreover, it isn't likely to be true: being overconfident can cause real harm, since it makes one more likely to make decisions one shouldn't (for example, if you are overconfident in your investment ability, you will actually lose money). 3) seems to fall into a general value difference: services like PredictionBook are for people who want to know whether they are actually modeling the world better. If you have to actively put blinders on to convince yourself you have a good model of the world, and you are OK with that, then there's really not much to say. But then why are you even here?
Let me try and restate.
Instead of "why don't you use PredictionBook", let's imagine that the question was "why don't you use a chin-up bar". My answer is basically the same.
I'm not trying to improve what this tool improves; my arms/brain are strong enough for my purposes, and I'm not trying to go beyond those.
Trends at 2050?
How many linear or logarithmic trends can be forecast out as far as the year 2050? I once found some graphs of CPU speed and storage per dollar per year, but seem to have lost them; and now I’m curious what other trend-lines might be worth thinking about.
(I’ve been writing a story, and would be happy to make whatever such details I include as plausible as possible, while also acknowledging that relying on such trends is a mug’s game.)
https://intelligence.org/2014/05/12/exponential-and-non-exponential/ ?
Those sorts of graphs, yes; except I saw ones extending the trend lines out to 2050.
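Extending such a trend line yourself is straightforward if you have the data: fit a line to the log of the quantity and read it off at 2050. A minimal sketch, with placeholder numbers rather than real prices:

    import math

    # Placeholder (year, dollars-per-GB) points -- substitute real data.
    data = [(2000, 10.0), (2005, 1.0), (2010, 0.1), (2014, 0.03)]

    # Least-squares fit of log10(price) against year.
    n = len(data)
    xs = [year for year, _ in data]
    ys = [math.log10(price) for _, price in data]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar

    # Read the trend line off at 2050.
    print(10 ** (slope * 2050 + intercept))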
I'm not sure if it'll keep going, but the Flynn effect (rising IQ over time) is mysterious enough that continuation is possible, and might be interesting.
Anyone here know about the International High IQ Society? I’m wondering if it would be worthwhile for me to try to get into it. Several of the free online IQ tests I’ve taken have put me at a few points below 120, so I probably wouldn’t be able to get into Mensa, but I might be able to squeeze out ~6-7 more points from practice in order to qualify for the IHIQS, which only requires a score of 124 (95th percentile).
I am not familiar with that organization. However, psychologist Carol Dweck, the author of the book "Mindset", claims that it is better to praise children for their hard work instead of their innate intelligence, because this encourages them to develop a "growth" mindset instead of a "fixed" mindset, the latter being less useful. On a similar note, I would guess that organizations built around the "fixed" mindset are probably less useful than organizations or clubs built around some goal that could help you develop a "growth" mindset. It is therefore probably better to join an organization that is working to achieve something, where what gets measured is the result of your efforts rather than an unchanging ability. Of course, being unfamiliar with the International High IQ Society, I might be wrong about them. Perhaps instead of looking for an organization to help your personal growth you are simply looking for a new place to meet people and/or network; in that case I have very little idea whether this organization is useful for that goal, so I will not say anything about it.
That dichotomy looks funny to me. What about praising for results?
Carol Dweck’s assistants praised the children after they did well on a test.
So, it was not efforts or intelligence per se they were praised for. They were praised for having a reason to be good in that series of puzzles (of course, in this particular case, children were told what the “reason” was). Of course, it would probably be interesting to test whether praising children for results and results only, without any allusions to what might have been a reason for their success (leaving that to children themselves to deduce), is good or not.
Interesting. I am a bit suspicious of the results as my priors keep telling me that the effect looks too large for a single sentence (kids aren’t THAT easy to influence), but yes, I see what you mean.
Relevant TED talk.
Free online IQ tests mean nothing. Don’t take them more seriously than you would take a horoscope.
I don't have any experience with the organization you linked, but if it is similar to Mensa, my advice would be: don't worry about anything, just go there and take the test. If they are a serious organization, the test will probably be very different from the online tests you took. If you pass the test, there is a high probability you will be disappointed with the organization. Also, try the Mensa test.
My experience with Mensa is that it collects smart people (on the other hand, not that smart—if you went to a decent university, most of your classmates would be on that level; so would most LW meetup participants) but then it does… pretty much nothing. You will also find there people who technically have a high IQ, but otherwise are boring and/or irrational. My advice would be to pick the few individuals who seem better than the rest of the crowd, and focus on those individuals. Mensa can give you contact with these few cool people, but that's pretty much all it can do.
The IHIQS admission test actually is taken online for a fee of $5, in contrast to Mensa’s which is taken in a controlled environment for a fee of $40, so I don’t know how serious the IHIQS is. That’s one of the reasons I was hoping someone here could share their experiences with them.
Yeah, that’s kinda what I figured, but I thought I might as well see if the LW community knew anything about it.
I’m sure I’d fit right in then. :)
If networking with smart people is pretty much all Mensa’s good for, could I just as well stick to LW?
I’m thinking about trying that. I wonder, though, if I should wait and try to practice for it, as the test batteries can only be taken once each. I would be very interested to hear more about your experience taking Mensa tests, even if you can only share general information about the tests themselves.
How do Bostrom type simulation arguments normally handle nested simulations? If our world spins off simulation A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if you have a world that simulates itself, or simulations A and B that simulate each other. Is there a good way to think about this?
A world simulating itself would be highly unlikely. It’s theoretically possible to have a universe simulate itself, since it can do things like just read from records instead of simulating the computer itself, but it’s not really feasible. You have to simulate a smaller or more coarse-grained universe than the one you’re in.
Possibly a stupid question, but wouldn’t a simulation of N human minds be feasible even if a simulation of a universe with N human minds is not?
I’m not sure where you’re going with this. We clearly have a universe, although it’s possible that it’s being simulated at lower detail than it appears. If you had a universe simulating itself, you’d have to simulate N minds, the computer, and the rest of the universe. The computer also simulates N minds, the computer, and the rest of the universe, so in order for it to work correctly, it needs to be simulated at the same detail as the N minds, the computer, and the rest of the universe combined. It’s one thing to simulate the computer to the same detail as everything else combined, but you have to do it including the computer. You’re simulating N minds an infinite number of times.
I thought that you meant "more coarse-grained" according to the experience of the conscious entities in the simulation, not "more coarse-grained" in the sense of including less total stuff (conscious and everything else) than an exact copy of the universe.
So a universe with a lone scientist and ample computational resources could afford to simulate the exact experience of the scientist, but couldn’t afford to simulate everything else at the same time. The confusing bit is that the scientist being simulated wouldn’t be able to tell if the simulation they were watching tick away actually corresponds to another conscious entity, or if the experience of observing a simulation tick away is just sensory data being piped in from the parent universe, in which case the scientist is...well, watching what exactly? Themselves?
Request for help: I'm looking for a study on scope insensitivity to use in one of my college entrance essays. If I recall correctly, the study showed something like: when asked how much they would pay to save one girl out of a group of eight in a tribe from a disease, people named over double what they would pay to save all eight.
I checked the standard scope insensitivity post, and tried my google-fu, but can’t remember where I originally heard it.
The one you refer to was mentioned in http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/:
Link is dead, but the article is here http://foreignpolicy.com/2007/03/13/numbed-by-numbers/.
The original study seems to be Kogut and Ritov, 2005. I found a pdf through Google scholar search (always a good way to find studies) over here.
Thank you very much :-)
Every night as I'm lying in bed trying to fall asleep, I think of five or six things I want to remember habitually or in the short run, so I get up and write them down. This costs me at least 25 minutes of sleep. I'm sure I'm not the only one with this problem; does anyone know good ways to store or record these ideas?
Keep a lot of slips of paper on a clipboard next to the bed. Give each idea its own slip (so they can easily be sorted later). Lie down in bed 25 minutes early, which will give the thoughts time to arise; then you'll still get your full sleep time.
I occasionally remember to keep pencil + paper by my bed for this reason, so that I can write such things down in the dark without having to get up or turn on a light. Even if the results aren’t legible in the usual sense, I’ve almost always been able to remember what they were about in the morning.
I usually try forming a picture to remember all the things, then focusing on it very hard. You can locate the picture in a specific location to help remember it (method of loci). As soon as you get up, write everything down. (You can "locate" it in your closet or somewhere you go right after waking, and that may help you remember.) This may have the added benefit of improving your memory.
You could also try to find a time earlier that you’re not doing anything requiring thinking (like walking, or transportation, or eating, or whatever), and try to think of these things during that time, then write them down before you go to sleep.
I use my phone which I keep charged next to the bed.
Most modern phones can easily record several hours of audio, so someone could just start a recorder before they go to bed, not even have to touch the phone at night, and fast-forward through it in the morning to get any relevant clips.
I was actually thinking of just typing a brief note to remind myself without having to get up, but I guess that works too.
The self-indication assumption seems to violate some pretty basic probability. Suppose that there are two possible universes: universe A has one person, and universe B has 99 people. Suppose that the prior probability for each is 50%. Under SIA, you are 1% likely to be each one of these 100 possible people. But that means that universe B has 99% probability, even though we just assigned it 50% probability. It can’t change without updating on evidence, which we never did. What happened?
We updated on the fact that we exist. SSA does this a little too: specifically, the fact that you exist means that there is at least one observer. One way to look at it is that there is initially a constant number of souls that get used to fill in the observers of a universe. In this formulation, SIA is the result of the normal Bayesian update on the fact that soul-you woke up in a body.
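To make that update concrete for the two-universe example above, here is a minimal numeric sketch (the uniform 100-soul pool is the toy model's assumption, not part of SIA itself):

    # SIA update for the toy example: universe A has 1 person, B has 99.
    # Treat "this particular soul got instantiated" as the evidence E.
    prior = {"A": 0.5, "B": 0.5}
    people = {"A": 1, "B": 99}
    total_souls = 100  # assumed pool of possible souls in the toy model

    # P(E | universe) = fraction of possible souls that wake up in a body
    likelihood = {u: people[u] / total_souls for u in prior}
    unnormalized = {u: prior[u] * likelihood[u] for u in prior}
    total = sum(unnormalized.values())
    posterior = {u: p / total for u, p in unnormalized.items()}
    print(posterior)  # {'A': 0.01, 'B': 0.99}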
Suppose there is a 2^(-n) chance of universe U_n with n people for n > 0. Initially, there’s nothing paradoxical about this. SIA converges. But look at the evidence you get from existing. Call that E.
P(U_n|E) = k·n·P(U_n) for some constant k (SIA weights each universe by its number of observers)
P(U_n|E) = P(U_n&E)/P(E)
P(U_n|E) ≤ P(U_n)/P(E), since P(U_n&E) ≤ P(U_n)
P(E) ≤ P(U_n)/P(U_n|E) = P(U_n)/(k·n·P(U_n)) = 1/(k·n)
Since k is constant and this holds for every n, P(E) = 0.
So, existence is infinitely unlikely? Or we must assume a priori that the universe definitely doesn’t have more than n people for some n? Or P(U_n&E) is somehow higher than P(U_n)?
Right. In a sense, P(E) is one over the number of possible people in the universe (scaled by how much of configuration space ‘you’ are).
Observing your existence only changes your probabilities if your nonexistence was also, causally, an option. In order for there to be an infinite number of possible people in the universe, but only this exponential prior distribution, the probability that any given chunk of stuff is ‘you’ has to go to zero.
This kinda confused me, I think because P(E) does not represent what I’d colloquially expect—I’m pretty sure now that it’s a sampling probability, not a global probability.
I don’t see why. If there’s an x% chance of a given chunk of stuff in universe U_1 being you, and there’s a 50% chance of universe U_1, then there should at least be an x/2% chance of a given chunk of stuff being you, right? And those calculations don’t actually use that it’s an exponential decrease. It could have been P(U_n) = 1/Σ(n) and it would still apply.
I’m not sure how you’re thinking of this, but think about what’s going on in jesscat’s “souls” toy picture. In order for U_n to be possible, there must be at least n souls. Since U_n exists for every n, there must be an infinite number of souls—and therefore zero chance that when we pick out exactly one soul for U_1, yours is the soul that gets chosen. Therefore, ‘your’ probability of existing in U_1 is 0.
This almost works like if each ‘soul’ was a configuration of a person, and you were one particular configuration—except U_n (and SIA) don’t specify that each person has to be unique. Instead, it’s more analogous to a particular configuration of a particular chunk of matter—that’s one way to put in uniqueness.
The vast majority of possible souls live in chaotic universes. Under this theory, rather than just having a random experience like a Boltzmann brain, you almost certainly have no experience. But the probability of having a sensible experience is still astronomically low.
Would it be possible for me to buy an actual printed version of HPMoR in some way? (Or at least, of the first X chapters of HPMoR, where X is at least a significant part of the story).
Thanks!
Fanfiction writers are generally prohibited from making money without the original author’s permission. This includes charging for prints.
If you just want a printed copy, you can order one printed from a PDF yourself (search terms: PDF bound book). It is likely to cost you upwards from $100 for the currently released chapters and come in 4-5 volumes.
If you want to thank Eliezer for writing the story, donate to MIRI the amount you find appropriate.
Is there a significant difference between the mathematical universe hypothesis and Hegelian absolute idealism? Both seem to claim the primacy of ideas over matter (mind in the case of Hegel, math in the case of the MUH), and conclude that matter should follow the law of ideas. The MUH just goes one step further and says that if there are different kinds of maths, there should be different kinds of universes, while Hegel never claimed the same about different minds.
I’ve always thought of the MU hypothesis as a derivative of Plato’s theory of forms, expressed in a modern way.
Reduced consumption of animal products, more specifically meat, should help my health, my purse, and the global poor (the last through reduced food prices). For reducing meat consumption in general it seems easy to just replace the meat in a lot of dishes with cheese, or to substitute some scrambled eggs for a meaty dish. What can I do for variety? I am especially looking for cheap, fast and/or convenient ways to put together a meal. I am very willing to trade off fast for the other two, as I can listen to audiobooks or similar while preparing food.
Chickens produce more meat than eggs in a given time-period. I originally encountered this when reading about animal cruelty, but I would expect the same would apply to saving money and lowering the demand for food. If anything, you should be replacing eggs with meat.
And I read the opposite statistic in the same literature.
Interesting. Here’s my source. Do you still have yours?
I tried to check whether chicken or eggs is more expensive. I found something giving the average price of both, but it measured chicken by weight and eggs by the dozen, and it didn’t give the weight of the eggs. They seem to be similar though.
First of all, I think that chart contains a programming error. Surely column 4 is supposed to be column 2 divided by column 3? That would be 23.9 for eggs, not 27.7. That’s still 4% higher for eggs than for chicken, but that’s a small number (nor is 20% a big number).
Second, kilograms of food is a silly metric. Maybe it’s OK as a first pass when comparing meat to meat, but eggs and milk are not meat (and if you care about a 20% difference, a first pass is not sufficient). We should be measuring calories or perhaps protein. I believe that chicken loses half its weight in cooking, while eggs do not, so if we substituted cooked weight for raw weight, that would dramatically favor eggs (but still be silly).
I tried to convert weight to calories using one site for both chicken and egg. It claims 1 calorie per gram of raw chicken breast and 2 C/g for cooked breast and 2.5 for cooked thigh, confirming 50% water loss. It claims 1.5 C/g for egg, both via the 100g option and the large egg option, which many sources (including yours) claim has a 50g weight uncooked. If it were all breast meat, the eggs would have 50% more calories per raw weight. If it were all thigh, it would only be 20% more. I think that there is some bone involved, which I assume cancels with the thigh, so I’ll stick with the 50% number.
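Spelling out that arithmetic (using the numbers as quoted; carrying the 50% cooking loss over to thigh is my assumption):

    # Calories per gram, as quoted above.
    raw_breast = 1.0                # C/g, raw chicken breast
    cooked_breast = 2.0             # C/g, cooked breast -> ~50% water loss
    cooked_thigh = 2.5              # C/g, cooked thigh
    raw_thigh = cooked_thigh * 0.5  # assume the same 50% loss as breast
    egg = 1.5                       # C/g, egg

    print(egg / raw_breast)  # 1.5: eggs have 50% more calories per raw gram
    print(egg / raw_thigh)   # 1.2: only 20% more if it were all thigh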
I’m not going to dig up my source, but I think it claimed equal protein per day for chicken or eggs. This analysis using your source suggests less for eggs.
I ran through the whole equation and I got 63.2, which is worse than what the table has. It’s 37% worse for eggs. I am surprised by the discrepancy though. The guy who wrote that is on LessWrong, so I’ll PM him about it.
It might not be enough to conclude that eggs are worse than meat, but it’s enough to show that substituting eggs for meat isn’t going to do a whole lot to help.
Using those numbers, along with recalculating the rows on the table, I got chicken as being 10% worse.
What “whole equation”? Meaning taking into account the claims about the relative suffering of eggs and chicken? It seems like it should be possible for people to agree on the relative suffering of different chickens, but after looking at the other numbers, I am utterly uninterested in what Brian has to say about it.
Sure, but you are moving the goalposts. You should have brought that up at the very beginning, rather than claiming that a 20% difference was cause for action.
Meaning taking into account the claims about the relative suffering of egg-laying chickens and meat chickens.
What I originally said was:
I was saying that replacing meat with eggs would not work, and doing the reverse would be slightly helpful. Apparently, replacing meat with eggs would be slightly helpful, but it still won’t do much.
My goal was to keep Metus from wasting effort on doing something pointless. I most likely succeeded. My goal in continuing the conversation was to correct any misconceptions I had, at which I also seem to have succeeded.
Actually, about 30% unless you like your chicken extra extra dry.
Thanks for the second source, but that doesn’t convince me that it’s “actually” correct.
How about this, will that convince you?
Probably not. It’s important not just to have authoritative sources, but also to explain the existence of false sources. Your failure to even notice that I had a source is disturbing. It makes me skeptical of your reading comprehension and unlikely to read sources you suggest.
Probably the cause of the discrepancy is the injection of water. I don’t know how that interacts with the other sources of data.
Moreover, I don’t care. I thought that would be obvious from the structure of my calculation.
There’s a lot of false information on all subjects on the internet.
It’s unusual to find on LW people who refuse to update on multiple pieces of evidence without contesting them, but hey, it’s your mind and you, as you said, don’t care.
Speaking of reading comprehension, your linked source says that there are 55 calories in 1 oz of boneless chicken breast, cooked, and that 1 oz of raw boneless chicken breast yields 35 calories. Now it seems to me that 35 is not one half of 55, even if you squint really hard.
Stirfries can be cheap, fast, convenient, and healthy. Dahl (lentil stew) is slow to make, but cheap and convenient.
I’m a big fan of those. Basically stir fry and ayurvedic diets are fast, easy and very healthy.
To coin a phrase, “What has government done to our money?”
https://www.google.com/webhp?sourceid=chrome-instant&rlz=1C1TSNP_enUS504US504&ion=1&espv=2&ie=UTF-8#q=oil%20price%20collapse
Because this seems to conflict with the “Austrian” doomsaying that the U.S. dollar would “collapse.” Now it looks like the U.S. dollar has gone into the opposite of a “collapse” because a dollar can buy a lot more oil today than it could a few months back.
Austrian economics has consistently made bad predictions. It doesn't give you the right answers for rates, FX, GDP, or inflation.
As opposed to what kind of economics? :-/
http://krugman.blogs.nytimes.com/2011/10/09/is-lmentary/
Yes, Krugman correctly predicted that the post-2008 flood of money would not lead to quick inflation. That's the example I've seen literally dozens of times as the "proof" that Krugman is right and everyone else is wrong.
Can I see any other pieces of evidence?
The effects of fiscal austerity vs deficit spending:
http://www.nytimes.com/2014/02/21/opinion/krugman-the-stimulus-tragedy.html?_r=0
That’s not evidence, that’s Krugman patting himself on the back. For evidence I would like to see a testable prediction made before the fact.
In this particular case, Krugman’s original position was that the stimulus could be useful but was not sufficient. There is enough wiggle room for two elephants there—if the stimulus failed, Krugman would have pointed at himself saying it was insufficient and if it obviously worked, he would have pointed at himself saying it would be useful.
In general, I find Krugman to be an interesting example of a very smart guy who either became mindkilled or deliberately decided to do propaganda “for the greater good”. His columns are full of classic motivated reasoning.
As to Keynesianism, see this keeping in mind Yvain’s recent Beware the Man of One Study.
There was that time when he predicted that fiscal tightening in 2013 would refute his ideological opponents, and then … totally failed to admit he was wrong when the evidence came out against him.
Saudi Arabia's old oil and the US frackers are currently in a price war. Oil is cheaper for every importer, not just for those who happen to use the U.S. dollar as their domestic currency.
Also, both are trying to screw the Russians (who are dependent on oil money).
Russia hasn't been nearly as negatively impacted as Canada, so far. Look at the Canadian dollar plummeting compared to the USD. I always thought Russia was better known for natural gas than oil, though granted I haven't researched that at all.
Seriously, compare Russia to where they were 10 years ago, then do the same for Canada. We're stuck in Western media talking points and a Cold War mindset.
The Canadian dollar was falling well before the current oil price drops. WTI peaked around $108 in June 2014, but the CAD has been falling fairly steadily since Sept 2012, when it was over $1 USD. Yes, the most recent fall has been happening at the same time as oil prices have been falling, but it's been falling at about a cent a month, compared to the roughly half a cent a month it was falling for the two years before that (when oil prices were basically flat).
Russia is better known for gas, because gas is harder to ship and more dependent on pipelines—oil can be shipped by tanker or rail more easily. As such, if Europe gets Russian oil cut off it can buy from the Saudis, but if Russian gas gets cut off, they have many fewer options, and a cold winter. That said, LNG tankers are getting more common (largely to take advantage of the cross-Atlantic arbitrage between the frack-happy Americans and the enviro Europeans), and will alleviate that problem somewhat.
Gas prices are pegged to oil prices.
Like chaosmage said, oil is getting cheaper in all currencies (at least the ones not experiencing hyperinflation). Thus it isn’t related to the country’s economic policy.
Also, if you update against "Austrianism" every time the price of oil drops, do you update in its favor every time it rises?
As a general rule, doomsayers are rather silly. Doom not coming to pass is the expected result, and shouldn’t surprise you.
For weak effects your position holds, but for stronger effects you should duly consider anthropic effects. http://en.wikipedia.org/wiki/Anthropic_principle
Good to keep in mind, but not applicable here.
I want to open up the debate again whether to split donations or to concentrate them in one place.
One camp insists on donating all your money to a single charity with the highest current marginal effectiveness. The other camp claims that you should split donations for various reasons ranging from concerns like “if everyone thought like this” to “don’t put all your eggs in one basket.” My position is firmly in the second camp as it seems to me obvious that you should split your donations just as you split your investments, because of risk.
But it is not obvious at all. If a utility function is concave, risk aversion arises completely naturally, and with it all the associated theory of how to avoid unnecessary risk. Utilitarians, however, seem to consider it natural that the moral utility function is completely linear in the number of people or QALYs or any other measure of human well-being. Is there any theoretical reason risk aversion can arise if a utility function is completely linear in the way described before?
In the same vein, there seems to be no theoretical reason for having time preference in a world of certainty. So if we agree that we should invest our donations and donate them later, it seems there is no reason to actually donate at any particular time, since at any such time we could follow the same reasoning and push the donation even further. Is the conclusion then to either donate now or not at all? Or should the answer be much more complicated, involving average and local economic growth and thus the impact of money donated now versus later?
Let the perfect not be the enemy of the good, but this rabbit hole seems to go deeper and deeper.
Your utility function need not be completely linear, just locally linear. If your utility function measures against the total good done in the world, your effect on the world will be small enough to be locally linear.
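A quick sketch of why the linearity question settles the risk argument: under a linear utility function only the expected amount of good matters, so splitting a donation between two identically risky charities changes nothing, while any concave function rewards the split (the payoff distribution here is invented for illustration):

    import math
    import random
    import statistics

    def outcome(donation):
        # Toy charity: with probability 0.5 the money does nothing,
        # otherwise it does good proportional to the donation.
        return donation * 2 if random.random() < 0.5 else 0.0

    def trial(split):
        # Donate 100 units either to one charity, or 50/50 to two
        # independent charities with identical payoff distributions.
        if split:
            return outcome(50) + outcome(50)
        return outcome(100)

    random.seed(0)
    for split in (False, True):
        goods = [trial(split) for _ in range(100000)]
        linear = statistics.mean(goods)                          # linear utility
        concave = statistics.mean(math.log1p(g) for g in goods)  # concave utility
        print(split, round(linear, 1), round(concave, 2))
    # Expected good is ~100 either way; the concave score is higher when split.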
Most people don’t want to optimize the total good done, but instead care about the amount of good they do. People donate to charity until the marginal utility they derive from purchasing moral satisfaction falls below the marginal utility they derive from purchasing other things. In this case, diversification makes sense, because utility you assign to good you’re responsible for is very non-linear.
If you’re giving to charity at all, that’s awesome. Do what motivates you.
Interesting answer. Seeing as my personal giving is completely out of pleasure not some kind of moral obligation, the argument for diversification is very strong.
Ah. Well, then there doesn’t seem to be anything to debate here. If you want to do what makes you happy, then do what makes you happy.
The theoretical question still stands.
...then, what? Every dollar of charity would be most efficiently invested and we’d solve the highest impact problems before moving on to others?
What happens if everyone thinks like this?
Diminishing returns. Giving $100M to a $10M problem because of information lag doesn’t actually allocate money nearly as efficiently as you might hope.
Do we have any evidence about the severity of that information lag? I can't imagine it would be much more than a year. I don't know if this is a legitimate concern. It may very well be worth a little wasted value from info-lag for the benefit of overall efficiency.
Are we talking about lag within the community of people who read Givewell regularly, or in the broader community? Because that might be right for the former, but the latter has a lag that I expect to be measured in at least decades. Possibly centuries, given how much money still goes to churches.
I don’t think people who are giving their charity to churches are trying to give to the most efficient charity.
This is sort of the point. The more one is concerned with efficiency (like those who are concerned about the marginal benefit of a great charity vs. the best charity), the less an info-lag will have an effect on him/her. Effective Altruists who are thinking at the level of “I should only give to one absolutely best charity, to maximize marginal value”, are not likely to pick a charity and give to it blindly, never again looking at the least bit of evidence of marginal effectiveness.
I’m willing to wager that the marginal value lost due to an info lag when everyone is giving to only the best charity and aren’t paying enough attention, is vastly outweighed by the value produced due to everyone giving to the absolute best charities.
Sure, everyone should at a minimum aim to be very near the top, and the crowd who’d ever benefit from this discussion is the same crowd that’s least likely to suffer from lag. I’m attempting to make a theoretical case, not saying “Don’t donate to the best charities!”.
You get similar results by each individual giving to a different charity. As-is, there’s a few charities that are far above the rest, but if everyone thought like an effective altruist, that wouldn’t be the case. Consider the idea: if everyone bought the best TV then the factory that makes that model of TV would be inundated with orders.
That’s true but for now most of the problems are so large that even if almost everyone started thinking this way it would take a while before any high priority problem hit the point that it was no longer the best to donate to. Enough time for people to switch over to the next most important.
I believe donating to the best charity is essentially correct, for the reason you state. You won’t find much disagreement on Less Wrong or from GiveWell. Whether that’s obvious or not is a matter of opinion, I suppose. Note that in GiveWell’s latest top charity recommendations, they suggest splitting one’s donation among the very best charities for contingent reasons not having to do with risk aversion.
If you had some kind of donor advised fund that could grow to produce an arbitrarily large amount of good given enough time, that would present a conundrum. It would be exactly the same conundrum as the following puzzle: Suppose you can say a number, and get that much money; which number do you say? In practice, however, our choices are limited. The rule against perpetuities prevents you from donating long after your lifetime; and opportunities to do good with your money may dry up faster than your money grows. Holden Karnofsky has some somewhat more practical considerations.
This depends on what someone believes the worthiest target for their donations is. If they're trying to optimize for goals that require the continuance of Earth-originating intelligence, or humanity, then they'll probably want to prevent human extinction. If they believe humanity faces a Great Filter or an existential risk in the coming decades, they'll want to donate eventually, presumably driving money to organizations that will decrease the chances of humanity being destroyed. Of course, this still leaves the consideration that they might invest very well, or at the right time (assuming they know how to identify those things), such that money x invested now for a period t, grown to a value n(x), will be worth more to the target organization at the end of t than x is worth to it now.
However, it's been argued from within effective altruism that the longer you wait to do good, the less the good you do will be worth. A donation is worth more now than later; effort applied at a later time will result in less value than the same level of effort applied now. This is called the haste consideration. The marginal increase in the value of an investment over time thus comes up against the (presumed) marginal decrease in the value of a donation made at any given time. I figure one could try measuring or calculating the rates of change here, and then find the point at which the curves cross, which would give the optimum time to withdraw money from investment and donate it for maximum impact. However, that seems difficult. I'm not aware of something like this previously being done, and I follow effective altruism closely. So, if this has all been quantified, that report hasn't been widely circulated.
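One toy way to set that up: let invested money grow at rate r and let the value of a marginal donation decay at rate d, so donating x at time t yields x·e^((r−d)·t). With constant rates this is monotonic in t, so the optimum is a corner (donate now if d > r, keep investing to your horizon if r > d); a genuine interior crossing point needs rates that change over time. A minimal sketch, with both rates invented:

    import math

    def value_of_donating_at(t, x=1.0, r=0.05, d=0.07):
        # Invest x until time t (growth rate r), then donate when a
        # marginal dollar does only e^(-d*t) as much good as today.
        return x * math.exp(r * t) * math.exp(-d * t)

    for t in (0, 5, 10, 20):
        print(t, round(value_of_donating_at(t), 3))
    # 0 1.0 / 5 0.905 / 10 0.819 / 20 0.67 -- with d > r, donate now.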
This seems all the more important on LessWrong, because users on this site are more likely to care about existential risk reduction, whether they identify with effective altruism or not. Also, one could estimate a time past which investing rather than donating would be pointless, because that is beyond the point at which one expects humanity to preserve anything worth either investing in or donating to.
Isn’t it the case that most investment opportunities have essentially the same expected returns, due to market efficiency? In that case you want to diversify, since you can lower the variance without lowering the expected return. But if you can identify a single giving opportunity that has a significantly higher expected return than the alternatives, then it seems like you’d want to concentrate on that one opportunity.
Most people give to charity because it makes them feel good—knowing you’re helping people is a warm fuzzy feeling that most people enjoy. Obviously this can lead to irrationality pretty easily—look at the ineffective charities kept alive by nice narratives—but if we take that as the base reason, then standard human loss aversion can explain splitting. Your goal isn’t to improve the world per se, but instead to have your money improve the world. In other words, the argument about linearity of utility disappears, because one bad decision will destroy all the value you get from charity, and since that value is partially independent of the expected value of the good done in the world, this can happen even if you’re investing in the charity with the highest EV.
I don’t 100% agree with this, but it’s fairly close to my gut feeling—I split my political donations, but not my humanitarian donations.
“The built-in features of our brains, and the life experiences we accumulate, do in fact fill our heads with immense knowledge; what they do not confer is insight into the dimensions of our ignorance. As such, wisdom may not involve facts and formulas so much as the ability to recognize when a limit has been reached. Stumbling through all our cognitive clutter just to recognize a true “I don’t know” may not constitute failure as much as it does an enviable success, a crucial signpost that shows us we are traveling in the right direction toward the truth.”
What is the minimum karma for posting to Discussion?
No idea, but you certainly should have enough.
I was asking on behalf of a friend who has a good essay—I wanted to bring him into the online community by encouraging him to post it to discussion.
By the way, based on some old bug tickets, I think the answer is 2.
Horrible news!!! Organic molecules have just been found on Mars. It appears that the great filter is ahead of us.
We’ve known for some time that Titan has plenty of organic molecules.
This only means that the great filter is not due to the difficulty of creating organic compounds. (In fact, creating organic compounds was already very low on a list of things that might be the cause of an early filter, or maybe even already effectively ruled out.)
It still could be a step between this and where we are now. For example, it could be the creation of Eukaryotes. Or it could be intelligence. Or other things.
The article says:
There's also matter exchange from Earth to Mars that could have brought life that originated on Earth to Mars.
Yeah, I know that there are other filters behind us, but I just found it a funny coincidence, since it came up while I was in the middle of a Facebook discussion about the Great Filter and someone shared this article by Bostrom.
In addition to the point made by ChristianKl that this may be non-biological, the origin of life is not the only possible Great Filter aspect which could be in our past. Other candidates for example include the rise of multicellular life, the development of complex brain systems (this one is not that likely since it seems to have developed multiple times in different lineages), the development of fire and the development of language.
What’s horrible about that?
We don’t know whether the Great Filter is ahead of us or behind us. The more evidence we find that life is common throughout the universe, the more the probability mass moves towards “ahead of us”, because more “behind us” possibilities have been eliminated.
Anything other than methane, which is the simplest organic molecule there is (CH4) and which, as far as I remember, has been detected in interstellar gas?
Detecting methane per se isn’t what’s interesting about this. You don’t need life to produce methane; there’s plenty of it in the outer solar system, Titan for example is covered in the stuff, and it’s been detected on Mars before. But it’s a small enough molecule that Mars daylight temperatures can give it enough velocity to escape from the planet’s gravity well, which means that if you detect nontrivial quantities in Mars’ atmosphere it’s probably being actively replenished somehow. Fluctuating levels of methane, which is what’s actually new about the Curiosity measurements, are strong evidence for active replenishment.
This doesn’t necessarily mean life—there are other possibilities involving deep geology of various kinds—but life is one of the candidate explanations. Though if it is life, it’s probably simple, boring microbial life similar to what appeared quite early in Earth’s history, so I think artemium is overselling the Great Filter implications on a couple of levels.
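The arithmetic behind “actively replenished” is simple exponential decay; the ~300-year e-folding lifetime below is an assumed round figure, used only to illustrate the argument.

```python
import math

# Assumed round figure for the photochemical lifetime of CH4 on Mars,
# used only to illustrate why detection implies an ongoing source.
LIFETIME_YEARS = 300.0

def fraction_remaining(years):
    """Fraction of an initial methane inventory left after `years`."""
    return math.exp(-years / LIFETIME_YEARS)

print(fraction_remaining(1_000))   # ~3.6% left after a thousand years
print(fraction_remaining(10_000))  # ~3e-15: effectively gone
```

On geological timescales ten thousand years is an eyeblink, so any methane seen today has to have been released recently.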
Sure, but there are tons of geological processes which produce methane without any involvement of life.
The discovery of methane spikes is certainly interesting, but to go from there to changing the Great Filter estimates is a looooooong jump.
I don’t disagree.
Uh...
Martine Rothblatt: She founded SiriusXM, a religion and a biotech. For starters.
http://www.washingtonpost.com/lifestyle/magazine/martine-rothblatt-she-founded-siriusxm-a-religion-and-a-biotech-for-starters/2014/12/11/5a8a4866-71ab-11e4-ad12-3734c461eab6_story.html
I don’t see how you can “found” a religion and know that it will stick around after your death, or in Rothblatt’s case, after going into cryo. A cult has to develop and thrive organically for several generations after its founder’s lifetime before it deserves description as a new religion. This happened to Joseph Smith’s Mormonism, for example, though I doubt anyone at the time could have predicted that the cult would remain sustainable after Smith’s lynching in 1844.
Given the True Death of Nathaniel Branden earlier this month, I wonder how American libertarianism will look in the coming years. The movement originated as a kind of Baby Boomer phenomenon in the 1960s and 1970s which elevated Branden into one of its preceptors, and as its Boomer adherents continue to die off, so will their enthusiasm for fringe writers and intellectuals like Branden, Ayn Rand, Ludwig von Mises, Murray Rothbard, etc. I doubt that these writers will have much name recognition in 50 years, or that they will receive acknowledgement for having insights into the human condition that we can’t live without.
We’ve seen this happen to trendy intellectuals before, where one generation’s heavy thinker becomes the next generation’s obscure crank your grandfather might have told you about. Take the philosopher Hegel in the 19th Century; or the Polish aristocrat Alfred Korzybski, whose General Semantics made a huge impression on science fiction writers in the middle of the 20th Century; or Buckminster Fuller, for yet another example.
Of course, this can work both ways, where someone stumbles upon an important insight that few others recognized at the time, only for the appreciation to come long after his death. This has happened to the Reverend Bayes.
I get the impression that transhumanist intellectuals have a short shelf-life as well, unless something really breaks loose and we start to see transhuman weirdness in our daily lives. Otherwise, we’ll enter into the era of “FM who?” “Robert Anton who?” “R.U. who?” and some other people I could name.
You’ve chosen a strange grouping to represent American libertarianism. When I think of American libertarian intellectuals of the 1960s and 1970s, the first name that springs to my mind is Milton Friedman.
Predictions are hard, especially about the future, but I doubt that the names and works of the likes of Friedman, Buchanan, Coase, Becker, Fama and Lucas (and, yes, Rand) are going to be forgotten or fall into disrepute any time soon. They have given birth to entire schools of thought (e.g. ‘Chicago School’ economics, Law & Economics movement) and institutions (e.g. GMU). There also appears to be no shortage of current libertarian intellectuals, and there appear to be more libertarian or libertarian-influenced politicians than ever, at least in the Anglosphere.
You appear to give no reasons for your predictions of decline for libertarianism or transhumanism. That is unfortunate.
This passage from Wikipedia makes me doubt your description of Ayn Rand as a “fringe writer”:
This tallies with my own impression that Austrianism seems to be doing very well for itself and is hardly fading out.
Doing well for itself among “fringe people” or among people who matter (i.e. economists, other academics, legislators, central bankers)? My impression has been that it has found a devoted following among “internet people” who lean conservative and libertarian, but isn’t gaining influence in the government-academia complex.
Speaking of American libertarianism as a Baby Boomer fad, I don’t understand the late David Nolan’s grievances against the American political order. He grew up comfortably middle class after the Second World War, went to an elite university (MIT) and graduated with a political science degree, which suggests a lack of economic pressure to become employable. He also overdosed on Robert Heinlein’s novels growing up, a fairly common pathway for Baby Boomer libertarians, it seems. Yet he claims that Richard Nixon’s imposition of wage and price controls somehow radicalized him politically and led him to found the Libertarian Party.
Uh, WTF? Hardly anyone remembers those wage and price controls now, and they certainly didn’t cause an economic disaster that required forming a new political party. Franklin Roosevelt’s presidency imposed far more strenuous controls on the American economy during the Second World War, and the country seems to have recovered well from them after the war ended without radicalizing people into libertarians.
It gives me the impression of that generation of American libertarians as somewhat spoiled guys who didn’t appreciate how good they had it.
The American political order in the 1970s was very different from the present day’s. Nixon’s wage and price controls were the straw that broke the camel’s back, in this regard. It’s not surprising that a classically liberal party would be founded in response to them, although ultimately this was a minor factor, if one at all, in shifting prevailing views in such matters.
The issue isn’t economic disaster, but a failure to act according to certain ideals.
Roosevelt was a Democrat; anybody who wanted fewer price controls than Roosevelt imposed could go to the Republicans. But Richard Nixon was a Republican, and going to the Democrats wouldn’t have helped, so he needed a new party.
I’d say the wage and price controls were more important as evidence that the Republican party had abandoned capitalism. Unlike with Roosevelt, that left no major party for capitalists to support.
Dale Carrico:
Robot Cultists Looking for “Conscientious and Discreet” Helpmate for Guru
http://amormundi.blogspot.com/2014/12/robot-cultists-looking-for.html
It sounds as if Bostrom, a man who lives in the 21st Century, wants someone like a 19th Century “servant.” Does he feel the need to keep up appearances with the Anglo-Norman Brits at Oxford?
I wonder what Dale will do with the Harper’s article on “robot cultists.”
Oh, don’t be daft. Executive assistants are a completely normal 21st Century job. Basically, he wants a secretary and there’s nothing unusual about it.