Those people may have a better chance of succeeding.
Fadeway
I’ve failed Uberman twice myself. Your plan is pretty much optimal, except for the naptation.
“Cut your naps down to 6 as quickly as you can without it hurting too much”.
From my own knowledge, which may or may not be trustworthy, naptation doesn’t need to be ended prematurely—the whole point is to cram a huge number of naps into a short timeframe in order to learn to get REM within a 24-minute interval (dreaming is a sign that you’re managing this). Getting a few extra naps will just decrease your REM deprivation. The way I would do it: take 12 naps a day until you find yourself unable to fall asleep for one at all. The critical thing is that you stay in bed until the alarm rather than getting up after ten minutes; also keep in mind that some people have trouble falling asleep for naps in general, which is a separate issue. When you fail to fall asleep for a nap, that’s a sign that you’ve had enough and can’t sustain 12 a day any longer; either cut two naps or go straight down to 6 a day. I’d choose the latter.
Also, um, give beds a wide berth outside naptime. And get more than two alarms, preferably with one placed more than 10 meters away from the bed—the long walk to it and back will ensure you actually wake up in the process of turning it off.
I discovered this issue for myself by reading a similar article and going through the same process, but with my third thought being “does that guy [the Prime Minister in this story] really believe this thing that I believe [in this case, pro-choice]?” I think he’s bad because he broke the rules, then I forgive him because he’s on my side, then for one reason or another I start to wonder whether he really is on my side...and notice that I’m trying to decide whether to blame him for breaking the rules or not. (I think this is because I myself use irony a lot, so when I hear a statement that is in some way ambiguous or silly, I reflexively ask myself whether it is sincere or sarcastic, even in a situation where irony would be unacceptable/unthinkable, as is the case with a public statement.)
I’m not sure how many times this happened to me before I noticed, but nowadays I just think “broke the rules, −10 points even though I like this guy”, and then, “oh and he agrees with me, gotta increase his score for that”.
Google never fails. The chart shall not allow it.
Sounds like a fun ritual. Makes me wish I were in Boston so I could attend.
I’ve doubted his process from the start—I remember reading another commenter’s observation that he had forgotten to add iron, and his subsequent reply that this mistake was the cause of his feeling bad. I know nothing about nutrition (except that it’s not a very good science, if it’s a science at all), yet iron is obvious even to me. To miss it shows that he didn’t really do much double-checking, much less cross-referencing or careful deliberation over the ingredient list.
I’m really hopeful about Soylent—I’d even jump in and risk poisoning to test it myself, if I were living alone. If anything, this experiment highlights how untrustworthy and limited our dietary knowledge is (and should motivate us to improve it). If this fails due to a new form of scurvy, the cause can be found and the experiment retried. If it fails due to not having read information that’s already out there, well, that’s a downer.
I’ve read a significant number of your essays/articles and love the stuff. It’s kinda hard to keep track of new material since the RSS feed tends to dump dozens of small changes all at once, so this post is much appreciated.
Is it useful to increase reading speed, even if it takes only a minimal amount of time to go from the basic level to some rudimentary form of training? I’ve always been under the impression that speed increases in reading are paid for with a decrease in comprehension—which is what we actually care about. Or is that only true at the upper speed levels?
What was the name of that rule where you commit yourself to not getting offended?
I’ve always practiced it, though not always as perfectly as I’ve wanted (when I do slip up, it’s never during an argument; my stoicism muscle is fully alert at those points). An annoying aspect of it is when other people get offended—my emotions are my own problem, so why won’t they deal with theirs; do I have to play babysitter with their thought process? You can’t force someone to become a stoic, but you can probably convince them that their reaction is hurting them and show them that it’s desirable for them to ignore offense. To that end, I’m thankful for this post; upvoted.
I agree—you can get over some slip-ups, depending on how easy the thing you’re attempting is relative to your motivation.
As you said, it’s a chain—the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you’re attempting, a hard reset is sensible because it saves the time you’d sink into an already doomed attempt *and* makes the next one easier (due to the deterrent effect).
I disagree. This entire thread is so obviously a joke that one could only take it as evidence if they’d already decided what they want to believe and were just looking for arguments.
It does show that EY is a popular figure around here, since nobody goes around starting Chuck Norris threads about random people, but that’s hardly evidence for a cult. Hell, in the case of Norris himself, it’s the opposite.
If you want to get up early and oversleep once, chances are you’ll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.
Small failures always becoming huge ones also helps as a deterrent—if you know that the single cookie that bends your diet will end with you eating the whole jar and canceling the diet altogether, you will be much more likely to avoid even small deviations like the plague next time.
God. Either with or without the ability to bend the currently known laws of physics.
This was my argument when I first encountered the problem in the Sequences. I didn’t post it here because I haven’t yet figured out what this post is about (I need to sit down and concentrate on the notation and the author’s message, and I haven’t done that yet), but my first thought when I read Eliezer claiming that it’s a hard problem was that as the number of potential victims increases, the chance of the claim being true decreases (until it reaches a hard limit equal to the chance that the claimant has a machine that can produce infinite victims without consuming any resources). And the decrease in probability isn’t just due to the improbability of a random person having a million torture victims—it also comes from the condition that a random person with a million torture victims also, for some reason, wants $5 from you.
Where is the flaw here? What makes the mugging important, despite how correct my gut reaction appears to me?
The point is that a superhero can’t take preemptive action. The author can invent a situation where a raid is possible, but for the most part, Superman must destroy the nuke after it has been launched—preemptively destroying the launch pad instead would look like an act of aggression from the hero. And going and killing the general before he orders the strike is absolutely out of the question. This is fine for a superhero, but most of us can’t stop nukes in flight.
A dictatorship is different because aggression from the villain is everywhere anyway—and it’s guaranteed that we will be shown at least one poor farm girl assaulted by soldiers before our hero takes action against the mastermind. Only when the villain is breaking the rules egregiously and constantly is the hero allowed to bend them a bit.
If you have a situation with both an antihero and a hero in it, the hero can be easily predicted—as opposed to the antihero, who is actually allowed to plan. Superheroes end up quite simple: the rules they obey are so strict that they can only take one course of action (their choices tend to be about whether to follow the rules or not, rather than between two courses of action that are both allowed). And that course of action often isn’t the most effective.
I can definitely agree with 5, and to some extent with 3. With 4, it didn’t seem to me when I read this months ago that the Superhappies would be willing to wait; it works as a part of 3 (get a competent committee together to discuss after stasis has bought time), but not by itself.
I found it interesting on my first reading that the Superhappies are modeled as a desirable future state, though I never formulated a comprehensive explanation for why Eliezer might have chosen to do that. Probably to avoid overdosing the Lovecraft. It definitely softens the blow from modifying humanity’s utility function to match their own.
You definitely hit the nail on the head with 5. Finding the other guy’s pain and highlighting it, as well as showing how your offer helps what they actually care about, is both a basic and a vital negotiation technique. Call me when I’m organizing the first contact mission; I might have a space diplomat seat ready for you.
What do you mean, specifically? “Having fun” aside, being emotional about a game is socially harmful/uncool in the same way a precommitment can be.
-Hanlon’s razor—I always start from the assumption that people seek the happiness of others once their own basic needs are met, then go from there. Helps me avoid the “rich people/fanatics/foreigners/etc are trying to kill us all [because they’re purely evil and nonhuman]” conspiracies.
-”What would happen if I applied x a huge number of times?”—taking things to the absurd level helps expose the trend and is one of my favourite heuristics. Yes, it ignores the middle of the function, but more often than not, the value at x->infinity is all that matters. And when it isn’t, the middle tends to be obvious anyway.
When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize—which I consider a rationalistic sin, not a virtue.
Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example “the virtue of determinism”?
Didn’t predictions for the Singularity follow a similar trend? Older people predicting 30-40 years until the event, and younger predictors being more pessimistic because they’re likely to still be alive even if it happens in 60 years?