Open thread, Aug. 17 - Aug. 23, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Are there major points that MIRI considered to be true 5 years ago but doesn’t consider to be true today?
Eliezer is the only staff member we still have around from 2010, and I’m not sure what he’d say his biggest updates have been. I believe he’s shifted significantly in the direction of thinking that the best option is to develop AI that’s high-capability and safe but has limited power and autonomy (e.g., Bostrom’s ‘genie AI’ as opposed to ‘sovereign AI’), which is interesting.
I came on at the end of 2013, so I’ve observed that MIRI staff were very surprised by how quickly people started taking AI more seriously and discussing it more publicly over the last year—how positive the reception to Superintelligence was, how successful the FLI conference was, etc. Also, I know that Nate now assigns moderate probability to the development of smarter-than-human AI systems being an event that plays out on the international stage, rather than taking most of the world by surprise.
Nate also mentioned on the EA Forum that Luke learned (and passed on to him) a number of lessons from SIAI’s old mistakes:
Aside from the impact of FLI etc., I’d guess MIRI’s median beliefs have changed at least as much due to our staff changing as due to updates by individuals. Some new staff have longer AI timelines than Eliezer, assign higher probability to multipolar outcomes, etc. (I think Eliezer’s timelines lengthened too, but I could be wrong there.)
Is this a new bias? I haven’t seen it mentioned before. Abstract (emphasis mine):
Perhaps you could expand it and post to discussion, so it can be found by tags? I seem to remember a passage in SSC about good/poor calibration and high/low probabilities, in that recent post about internet communities, freedom and witch migration...
Sounds like cross-contamination of anchoring effects.
Although anchoring works with the presence of irrelevant numbers too.
A quick update: My astrobiology posts are still coming, but after conferring with Toggle and being surprised by a massive response to my first blog posts (who linked me on ycombinator? Thousands of views in two days), I am going through a bit more research about the geochemical history of Earth and other solar system objects in an attempt to be as rigorous as possible. I only play an astrobiologist on TV, so to speak; day to day my focus is a little narrower.
The marketplace of ideas wants you to change focus.
Hah, believe me I have been looking into ways to get professionally involved in astrobiology once my PhD is done (gimme 18 months, cracking fascinating problems in the study of metabolic regulation and systems bio here). It’s pretty hard to break into, though there was that massive gift for SETI from that Russian billionaire recently… it’s either that, going into metabolic engineering, or continuing with basic Eukaryotic cell biology for me most likely.
I like this part from a MIRI blog article:
When people insist on using fictional evidence, at least give them one that matches your concerns.
(It will probably also help that King Midas is higher status than Terminator.)
Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases by Scott O. Lilienfeld, Katheryn C. Sauvigné, Steven Jay Lynn, Robin L. Cautin, Robert D. Latzman and Irwin D. Waldman
If I wanted a news feed I would be on a news website. This does not belong in an open-thread.
make an account (of your own if you plan to keep posting in this way)
actually make comments; not just quoting the contents of links
start a link-thread if you must (edit: there is a media thread)
That comment is something of a non sequitur to me. Abstracting recently published papers doesn’t strike me as a central example of a news feed, and I’m not sure why such abstracts don’t belong in our open threads (granted, those abstracts might be better suited to the media thread).
I don’t appreciate having information thrown at me without explanation. I visit the open thread (and LW) to see what people have to say. Unless the user is the publisher of the paper, they should be including their own reason for sharing: some kind of explanation of why this copy-paste is better than drivel found anywhere else on the internet.
I believe it lowers the standard of the place we have here to have an anonymous user posting in this way.
Multiple solutions:
get an account
stop posting
include comments or analysis
start a new thread for it so it doesn’t cloud up the OT
The Open Thread is for all posts that don’t necessitate their own thread. The media thread is for recommendations on entertainment. I don’t see why comments should be necessary to bring a paper to LW’s attention, especially in the Open Thread, and others clearly disagree with your opinion on this matter.
There is a combination of multiple factors that I have an issue with.
anonymous user
multiple posts
no commentary or discussion started
link outbound
repeated process each week
Alone I have no problem with any of these things (or a few of them); together they add up to “that which by slow decay” destroys the nature of LessWrong as we had it before.
There are multiple reasonable solutions to this problem, which include:
me leaving LessWrong for greener pastures.
changing any of the 5 things:
making an account
posting less each week
starting a discussion
not linking outbound (but also not starting content that links outbound)
not doing it every week
But I would like the 5 changes to be tackled first before I leave.
TL;DR: what actual useful outcomes are there of political discourse for people who don’t have political advocacy as an area of comparative advantage?
I think the ability to discuss politics clearly is one of the biggest successes of rationality, perhaps because politics breaks minds so thoroughly. It is ironic, then, that it’s not obvious whether this is of any use whatsoever to the majority of us. I don’t believe that political advocacy is my area of comparative advantage, and given the general aspie nature of LWers I would assume this probably applies to most of us. Indeed, the main effect of thinking rationally about politics is that I can’t stand non-rationalist politics anymore. Having the correct political views is of some small benefit, although since no general election has ever been won or lost by one vote, it’s very unlikely to change anything.
Are there other benefits to having accurate political views? Some people, largely on both the far left and the far right, think that total social collapse is likely within a few decades. I think they might be partially right—for instance, in Europe previously fringe parties are becoming mainstream, both communists protesting against austerity and nationalists protesting against immigration. In Greece, actual neo-Nazis have won seats. If another financial crash happens (which, again, I think is likely, quite probably before the end of the year), then as countries go totally bankrupt because they are still in debt from the last crash, we will see extremist parties getting into power, which will probably make the situation worse (not that communism and nationalism will always cause disasters, but I don’t trust panicking electorates to vote in people who can do it properly, and if half the population goes right and half goes left this causes more problems like riots).
Anyway, I seem to have digressed, but my point is, if one can predict that society is going to collapse, would that be useful? Not really, as far as I can tell. It’s more difficult to make money from a collapse than from a boom, and the timescales are too long anyway. If you hold the converse opinion, that the future will be OK, then the advice that leads to is simply “buy indexes”, which is standard advice anyway.
If I decided that society is likely to collapse, is that useful because it gives me warning to get out while there is still time? Not really—only the really paranoid people think that things are going to get really bad in the immediate future. Even if I thought there was likely to be total collapse, an outcome so bad that I need to leave, as opposed to the far more likely outcome of general stagnation, the predictions are still decades in advance, and one does not need that much time to prepare to leave.
Is there some way politics could be useful in how I interact with other people? Some people might say that it’s important to put aside the subconscious prejudices everyone has. But, generally, I would say that subconscious prejudices are just wrong on a probabilistic level—even if some characteristic does vary between group A and group B of people, one is far better off just measuring the characteristic directly, in most cases.
I think some of what I’ve written pattern-matches to being slightly crazy, but what I am trying to ask is whether there is any benefit to having accurate views about politics. Personally, I think I spend too much time thinking about politics because I find it interesting, and it feels like I’m doing something useful. But maybe this feeling of being useful is an illusion, and thinking about politics is no more useful than watching TV.
I think it depends on how widely you understand “politics” and how closely you interact with power structures. Small businesses, for example, care a lot about local politics. On a certain level “Are cops my friends?” is a political question. If you have a choice of states or countries to live in, politics often matter. If you are in academia, politics matter.
On the other hand, yes, in the first world you can probably live a successful life without caring a whit about politics. I wouldn’t recommend this in the third world (including everything east of Germany).
Thinking about something is always more useful than consuming content :-/
I was thinking more about the ideological level, as opposed to, say, office politics. The two are not entirely distinct—social justice movements are concerned with office politics, for instance. But I think to find out more about office politics I would want to think more about something like ‘The 48 Laws of Power’ rather than knowing more about ideology.
I think I would make an exception for really good content, but generally, yes this is a good point, and since politics is so hard to think about clearly, perhaps it is especially good for rationality training.
I think it is useful. And the usefulness may be hard to measure because it manifests in pitfalls that you didn’t fall into, ignorant decisions that you didn’t make, blights on your imagination that you didn’t have, prejudices that you didn’t express.
It’s like that when people do preventative work of any kind. Their work is often not recognized or noticed.
On the other hand political advocacy for your particular interests is part of your comparative advantage because no one else is going to do it for you.
A side note, but my immediate reaction to reading that was “that’s overconfidence”, though it’s quite possible I’m interpreting “financial crash” more grandly than you intended. If I define it — just to try to be more concrete, albeit still pretty fuzzy — as an event as bad as the 2007-8 financial crisis, I’d put a probability ≈5% on that happening by 2015’s end.
(Adding the conjunction that explicitly nationalist and/or communist parties gain power actually wouldn’t reduce my probability that much, because that already happens; see the Scottish National Party winning 56 of 59 Scottish seats in this year’s general election, and AKEL’s performance in Cyprus’s 2006 & 2008 elections.)
It may be worth noting that the SNP is quite unlike the far-right nationalist parties I think skeptical-lurker is worried about. They’re a party of the left rather than of the right, they aren’t outrageously far to the left (at least by European standards), and their “nationalism” is all about getting out of the UK rather than about keeping out immigrants, throwing out black people, and other such traditional policies of nationalist parties.
Indeed. (And my impression is that Demetris Christofias wasn’t much of a stereotypical communist as president of Cyprus, either; he seems to’ve concentrated on the dispute with Turkey.) I was trying to also illustrate another knotty aspect of rigorously operationalizing a broad political prediction like this: which precise kinds of nationalist/communist party gaining power would count as fulfilling the prediction?
I’m basically claiming that I know better than the market, so I’m surprised you’re the first person to call me on it. Rigorous definitions are complicated by the fact that the 2007-8 financial crisis took about a year to bottom out—even if the next crash started right now, it (probably) wouldn’t bottom out by the end of the year. I’d say around a 50% probability of a crash of similar impact to 2008 starting by the end of the year. Perhaps I should get a PredictionBook account and register my unusual predictions.
Yes, predictions about nationalism/communism are simple extrapolations. I wish to emphasise that I don’t think this is necessarily a disaster—the SNP are not your stereotypical nationalists, and are not a huge source of concern IMO, and Japan is an ethno-nationalist state which seems to be functioning perfectly fine.
I do find it surprising/worrying that the frontrunner for the leader of the opposition in the UK is a socialist talking about encouraging growth by printing money, leaving NATO, funding homoeopathy, legislating against toy soldiers, etc., and is mostly rejected by his own party for being too left-wing.
To be fair, I didn’t look at the markets either. I asked my gut for a reference class forecast (“How often might I expect this kind of crisis to come along in general...?”) while trying to remember my impressions of what economic commentators have said about potential bubbles & such. (I used this sort of tactic to guess rough, first-cut probability estimates in the Good Judgment Project, where I did OK.)
Yeah, that was part of the reason my probability was as low as it was. Actually, reflecting on it a tiny bit more rigorously, I’ll nudge it down further to maybe 3%, because I originally rounded “before the end of the year” to “half a year” in my head, but the rest of 2015 is more like a third of a year.
With a looser prediction window like “beginning by the end of 2015, but hitting bottom before the end of 2016”, I’d raise my “maybe 3%” to “maybe 12%”.
I had to Google that one!
Edit to add: regardless of the probability disagreement, I did upvote you for elaborating.
The financial markets allow you to bet on such things. If you actually believe that 50% probability, you can place a bet in the markets at what would look to you like VERY favourable odds.
Would this involve shorting stocks? That seems dodgy, given that the default is for stocks to rise. And if my probability distribution is about 50% crash, 40% surge upwards, 10% stability, then shorting isn’t a particularly good idea.
Or maybe it’s time to buy some bitcoin...
You can express quite complicated views in financial markets.
That distribution doesn’t look like “the default is for stock to rise” :-) Besides, a 2008-magnitude crash would probably drive the equity markets to what, half of their current value? You can’t expect a “surge upward” to give you symmetrical gains on the upside, so your expected distribution is very lopsided.
Let’s make things even simpler—you believe that in half a year we’ll end up in one of two states—either, with a 50% probability, the equity markets will crash to half price, or, with a 50% probability, the markets will stay the same or go up. If this is so, you can take a position in the markets which is guaranteed to make you money, either a lot (in the first case) or a little (in the second case). Guaranteed, that is, if the markets do one of these two things.
It’s not—stocks rising is the prior; this is my posterior.
Exactly what instruments would guarantee this return? A mixture of shorts and options?
The simplest would be a put spread (strictly a ratio spread, since the two legs have different sizes). You would buy deep out-of-the-money (OTM) puts and write (sell) other OTM puts much closer to the current price. You would buy more deep OTM puts than you sell shallow OTM puts; however, because you would be buying cheap puts and selling expensive ones, you would start with a positive balance (a net credit). In the case of markets going nowhere or up, both sets of puts expire worthless and you keep the difference in premium (you earn a little money). In the case of the crash, both sets of puts expire in the money, but you are long more puts than you are short, so you make a lot of money.
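For concreteness, a minimal payoff sketch of that position at expiry; the strikes, quantities, and premiums below (70/95 strikes on an index starting at 100, a 3-to-1 ratio of legs, $0.50/$2.00 premiums) are made-up illustration numbers, not market data:

```python
def position_pnl(spot_at_expiry,
                 long_strike=70.0, long_qty=3, long_premium=0.50,
                 short_strike=95.0, short_qty=1, short_premium=2.00):
    """P&L at expiry of buying `long_qty` deep-OTM puts and selling `short_qty`
    puts closer to the money, on an index that starts at 100."""
    net_credit = short_qty * short_premium - long_qty * long_premium
    long_payoff = long_qty * max(long_strike - spot_at_expiry, 0.0)
    short_payoff = -short_qty * max(short_strike - spot_at_expiry, 0.0)
    return net_credit + long_payoff + short_payoff

for spot in (100.0, 110.0, 50.0, 80.0):
    print(f"index at {spot:>5}: P&L {position_pnl(spot):+.2f}")
# Flat or up: you keep the small net credit. Crash to 50: a large gain.
# Note the loss if the index lands between the strikes (e.g. 80), which is
# outside the two-state scenario stipulated above.
```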
Where are all the vids of the speeches and stuff from the EA global events? Surely they were recorded?
No, the cameras were sold and the money was sent to Africa.
(Just joking. At least I hope so.)
Planning fallacy most likely. I’d guess we should expect them after all three EAG events are done.
Are there any 2014 videos? I can find the 2013 keynotes here, but nothing since then.
If there were videos recorded and never posted for 2014, then the 2015 prospects look not so good.
2014 wasn’t recorded. There was a plan to record it that never materialized, IIRC. 2015 was recorded professionally, and August 13th was the original stated date for release. I’m just assuming that was way too ambitious for the editing time.
I took part in the Good Judgment Project, a giant prediction market study from Philip Tetlock (of “Foxes and Hedgehogs” theory). I also blogged about my results, and the heuristics I used to make bets:
http://aarongertler.net/good-judgment-project/
I thought it might be of interest to a few people—I originally learned that I could join the GJP from someone I met at CFAR.
I love your heuristics! They could all be summed up as: the world has inertia.
What are your thought rituals for optimising your cognitive processes when you don’t have much else to do? For instance, when you are walking between office buildings.
For me, I’m going to phase in the following ritual and see if it’s helpful. Asking myself:
What was done effectively or what outcomes were useful today?
How might things have been improved or what different ideas do you have?
What questions arising from today might be relevant to tomorrow?
How do you feel about the process and content of your life today?
Have you been radically honest or dishonest today?
I have found that the single greatest cause of my forgetting things is being engaged in other things. So during short periods of down-time (under five minutes), I try to give myself over to non-directed thought. This has been very successful, and I am trying to expand it.
I practice mindfulness during these times, which seems to increase focus.
The good, the bad, and the ineffective: social programs in America
If I wanted a news feed I would be on a news website. This does not belong in an open-thread.
make an account (of your own if you plan to keep posting in this way)
actually make comments; not just quoting the contents of links
start a link-thread if you must (edit: there is a media thread)
Hi! This is my first post. I have a physics/MWI question, and I don’t know if LessWrong / Discussion is the right place to post it, so is it OK if I ask it here? Here it goes:
Like any other amateur who reads Eliezer’s quantum physics sequence, I got caught up in the “why do we have the Born rule?” mystery. I actually found something that I thought was a bit suspicious (even though lots of people must have thought of it, or experimentally rejected this already). Note that I’m deep in amateur swamp, and I’ll gleefully accept any “wow, you are confused” rejections.
Here is my suggestion:
What if the universes that we live in are not located specifically in configuration space, but in the volume stretched out between configuration space and the complex amplitude? So instead of saying “the probability of winding up here in configuration space is high, because the corresponding amplitude is high”, we would say “the probability of winding up here is high, because there are a lot of universes here”. And “here” would mean somewhere on the line between a point in configuration space and the complex amplitude for that point. (All these universes would be exactly equal.) And then we completely remove the Born rule. Of course someone will have thought of this, and responds: “But if we double the amplitude in theory, the line becomes twice as long, and there would be twice as many universes. But this is not what we observe in our experiments: when we double the amplitude, the probability of finding ourselves there multiplies by four!” This is true if you study a line between the complex amplitude peak and a point in configuration space. But you are never supposed to study a point in configuration space; you are supposed to integrate over a volume in configuration space.
Calculating the volume between the complex amplitude “surface” and the configuration space is not like taking all the squared amplitudes of all points of the configuration space and summing them up. The reason is that, when we traverse the space in one direction and the complex amplitude changes, the resulting volume “curves”, causing there to be more volume out near the edges (close to the amplitude peak) and less near the configuration space axis.
Take a look at the following image (meant to illustrate an “amplitude volume” for a single physical property): go to http://www.wolframalpha.com and type in: ParametricPlot3D[{u Sin[t], u Cos[t], t/5}, {t, 0, 15}, {u, 0, 1}]
Imagine that we’d peer down from above, looking along the property axis. If we completely ignore what happens in the view direction, the volume (the blue areas) would have the shape of circles. If we’d double the amplitude, the volume from this perspective would be quadrupled.
But as it is, what happens along the property axis matters. The stretching out causes the volume to be less than the amplitude squared. It seems that the higher the frequency is, the closer the volume comes to a squared relationship with the amplitude, while as the frequency lowers, the volume approaches a linear relationship with the amplitude. Studying the two extreme cases: with frequency 0 the geometric object would be just a flat plane, with an obvious linear relationship between amplitude and volume, while with an “infinite” frequency the geometric object would become a cylinder, with a squared relationship between volume and amplitude. This means that the overall current amplitude-configuration-space ratio is important, but as far as I know, it is unknown to us.
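If it helps to check that claim numerically, here is a minimal sketch, assuming the “volume” in question is the area of the surface (u·sin t, u·cos t, pitch·t) swept out between the axis and the spiral, with a small pitch standing in for a high frequency:

```python
import numpy as np

def swept_area(amplitude, pitch, t_range=15.0, n=4000):
    """Area of the surface (u sin t, u cos t, pitch*t) for u in [0, amplitude]:
    the area element is sqrt(u^2 + pitch^2) du dt, integrated numerically
    (trapezoid rule in u, times the t range)."""
    u = np.linspace(0.0, amplitude, n)
    integrand = np.sqrt(u**2 + pitch**2)
    du = u[1] - u[0]
    return np.sum((integrand[1:] + integrand[:-1]) / 2.0) * du * t_range

for pitch in (5.0, 0.2, 0.01):   # large pitch = low frequency, small pitch = high
    ratio = swept_area(2.0, pitch) / swept_area(1.0, pitch)
    print(f"pitch {pitch}: doubling the amplitude multiplies the area by "
          f"{ratio:.2f} (scaling exponent ~{np.log2(ratio):.2f})")
```

The scaling exponent comes out near 1 for a large pitch (low frequency) and approaches 2 as the pitch shrinks (high frequency), matching the two extreme cases described above.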
In a laboratory environment, where all frequencies involved are relatively low, we would see systems evolving linearly. But when we observe the outcome of the systems, and entangle them with everything else, what suddenly matters is the volume of our combined wave which has a very very high frequency.
Or does it? At this point I’m beginning to lose track and the questions starts piling up.
What happens when multiple dimensions are mixed in? I’m guessing that high-frequency/high-amplitude still approaches a squared relationship from amplitude to volume, but I’m not at all certain.
What happens over time as the universe branches, does the amplitude constantly decrease while the length and frequencies remain the same? (Causing the relationship to dilute from squared to linear?)
Note that this suggestion also implies that there really exists one single configuration space / wave function that forms our reality.
So, what do you think?
What should you be doing right now if you believe that advances in AI are about to cause large-scale unemployment within the next 20 years (ignoring the issue of FAI for the sake of discussion)?
I think the standard answer is “acquire capital.” Figuring out which tasks are AI hard and then specializing in those is another possibility.
Not be “low-skilled”, obviously.
Oh, dear… From Marginal Revolution’s comment section:
The bitcoin mining computations are pretty provably meaningless—all it is is searching for hashes that fall below a difficulty target (essentially brute-forcing partial preimages). If you want examples of convincing millions of people to donate their computing power for meaningful computation, with no financial incentive, look at folding@home or rosetta@home.
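For anyone curious what that computation concretely is, here is a toy sketch; real mining double-hashes an 80-byte block header with SHA-256 against a vastly harder target, so the function and numbers below are purely illustrative:

```python
import hashlib

def toy_mine(header: bytes, difficulty_bits: int = 16):
    """Brute-force a nonce so that SHA-256(header + nonce) falls below a
    target -- a partial-preimage search, not a collision search."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = toy_mine(b"example block header")
print(f"nonce {nonce} gives hash {digest[:16]}...")
```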
Oh, I know—this is a case of weird meme propagation, not of lizard overlords tricking humankind.
How do you decide whether to post in Open Thread, Discussion, or Main?
Mainly, for me, it is the length of the article. If it’s more than two paragraphs, it goes into its own discussion thread.
Bentham’s Fallacies, Then and Now by Peter Singer
1824 edition of the book
If I wanted a news feed I would be on a news website. This does not belong in an open-thread.
make an account (of your own if you plan to keep posting in this way)
actually make comments; not just quoting the contents of links
start a link-thread if you must (edit: there is a media thread)
For the next two weeks I won’t be able to create the Open Threads precisely on Monday morning, and with some probability I won’t be able to connect at all.
There needs to be someone willing to embark on this admittedly very easy task. Please comment below if you wish to wear the cape for the time being.
I will do it!
What is something that would profoundly surprise you?
If I came in to work tomorrow to discover a large oak tree rooted in my office.
I get the impression (maybe wrong) that most answers to this won’t be what you’re looking for, but I have no idea what you’re looking for.
I am profoundly surprised when people act in ways I don’t expect, mainly because I have spent a long time building an internal model of how the people around me work (sorry, I can’t really explain how it works).
Proof of god[s].
An interesting take on ethics.
I found SMBC to be the jester at the court of transhumanism, saying with humor and exaggeration some nuggets of truth… I enjoy it greatly.
I especially like the question “Is it ethical to steal truffle mushrooms and champagne to feed your family?” That’s an intuitive concept fairly voiced. Calculating the damage to the trolley is somewhat ridiculous, however.
What are your thoughts on the following Great Filter hypothesis: (1) Reward-based learning is the only efficient way to create AI. (2) AGI is easy, but FAI is hard to invent because the universe is so unpredictable (intelligent systems themselves being the most unpredictable structures) and nearly all reward functions will diverge once the AI starts to self-improve and create copies of itself. (3) The reward functions needed for a friendly reinforcement learner reflect reality in complex ways. In the case of humans they are learned by trial and error during evolution. (4) Because of this, the invention of FAI requires a simulation in which it can safely learn complex reward functions via evolution or narrow AI, which is time-consuming. (5) However, once AGI is widely regarded as feasible, people will realize that whoever invents it first will have nearly unlimited power. An AI arms race will ensue in which unfriendly AGIs are much more likely to arise.
I don’t see why an unfriendly AGI would be significantly less likely to leave a trail of astronomical evidence of its existence than a friendly AI or an interstellar civilisation in general.
I can think of three explanations, but I’m not sure how likely they are: Gamma ray bursts are exploding unfriendly AGIs (i.e. there actually is astronomical evidence), unfriendly AGIs destroy themselves with high probability (lack of self-preservation drive) or interstellar space travel is impossible for some reason.
If interstellar travel (and astroengineering) is impossible, that is enough to explain the Great Filter without additional assumptions.
Oops! That’s right.
Try this game!
How good are you actually at solving an NP problem by hand?
I once had a homework problem where I was supposed to use some kind of optimization algorithm to solve the knapsack problem. The teacher said that, while it’s technically NP-complete, you can generally solve it pretty easily. Although the homework used such a small problem that the algorithm pretty much came down to checking every combination.
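For what it’s worth, a minimal sketch of the usual dynamic-programming approach to 0/1 knapsack; it is pseudo-polynomial in the capacity, which is part of why “technically NP-complete” instances with small integer weights are easy in practice:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming: O(len(values) * capacity).
    Returns the best achievable total value."""
    best = [0] * (capacity + 1)              # best[c] = best value using capacity c
    for value, weight in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```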
Re: rationality books for children. Winnie the Pooh is great for confirmation bias!
Remote Exploitation of an Unaltered Passenger Vehicle by Dr. Charlie Miller and Chris Valasek
....
Yet another reason to love my faithful 2007 car, on top of the hatchback and the big-knob dashboard controls I don’t need to look at to use, with not a screen in sight.
If I wanted a news feed I would be on a news website. This does not belong in an open-thread.
make an account (of your own if you plan to keep posting in this way)
actually make comments; not just quoting the contents of links
start a link-thread if you must (edit: there is a media thread)
A few nutrition-related questions:
Why does Soylent 2.0 have so much fat? They appear to be going for 45% of calories from fat, whereas the typical recommendation is 10%-35%.
Why does the Bulletproof stuff include so much saturated fat? It appears that the consensus is that saturated fat significantly increases blood cholesterol and arterial plaque formation—curious why such a deviation here.
I want to model incentives, behaviour and agents. Are there any handy, intuitive, web-based tools for mechanism design?
Here is a theory (fox lens viewpoint) of why many people are under economic duress. The problem is not consumerism but producerism:
In the US there are many ambitious, hardworking people who are very excited about their work and career. Such people view work, achievement, and production as its own reward. They primarily value the intellectual satisfaction and social validation that comes from career success; money is less important.
However, even though they don’t necessarily value the money that highly, they still get a lot of it, because they are typically quite successful (other things being equal, people who value achievement highly tend to achieve a lot). Because they have obtained quite a bit of money without directly aiming for it, they don’t spend a lot of time thinking about how to conserve it. Such a person’s inner monologue might run something like this: “I’ve got to get a job at Google/Facebook/Amazon/Goldman, they’re the best in the business, and I want to work with the best people and change the world!… (person works hard to get a relevant degree from a top school, does a lot of networking and side projects, etc etc and finally lands the dream job) … Okay sweet, now I’m here! This is awesome, now I’ve got to hit a home run on this project… oh right, and I’ve got to buy some clothes and a car… how much will that be? $30K....? Seems pricey, but I’m making $150K, so it’s no big deal …”
The point is, because these people have lots of cash that they don’t care too much about conserving, they drive prices up for everyone else. Consumer companies orient themselves towards the people who have a lot of cash and don’t care too much about getting the best prices; such people probably are the most valuable customers. Other people, who don’t love their work and view it as a necessary evil, find that they need to work much harder than they should because of the crazy overachievers who are running up the prices on everything.
Could producerism also be a major issue in East Asia?
This article is interesting to me because I have this belief that weight loss is basically about eating less (and exercising more). And some extremely high percentage of everything said about dieting, etc. beyond that is just irrational noise. And that the diets that work don’t work because of the reasons their proponents say they work, but only because they end up restricting calories as a byproduct.
This line is the funniest to me. This is why I think low-carb diets work: because if you eliminate the primary source of calories in a person’s diet (carbs, which can be 50%+ of many people’s diets), they will eat significantly fewer calories overall by restricting themselves to only protein and fat. But people have, instead, made up all sorts of fancy, science-y sounding reasons why carbs were evil.
What do the following have in common?
“Trying to lose weight? Just eat less and exercise more!”
“Trying to get more done? Just stop wasting time!”
“Feeling depressed? Just cheer up!”
“Want not to get pregnant? Just don’t have sex!”
Answer: they would all be quite successful if followed, but they are all difficult enough to follow that people who actually care about results will do better to set different goals that take more account of how human decision-making actually works.
If you eat less and exercise more then, indeed, you will lose weight. (I do not know how reliably you will lose weight by losing fat, which of course is usually the actual goal.) But you don’t exactly get to choose to eat less and exercise more; you get to choose to aim to do those things, but willpower is limited and akrasia is real and aiming to eat less and exercise more may be markedly less effective than (e.g.) aiming to reduce consumption of carbohydrates, or aiming to keep a careful tally of everything you eat, or aiming to stop eating things with sugar in, or whatever.
People with plenty of willpower, or unusually fast metabolism, or brains less-than-averagely inclined to make them eat everything tasty they see around them, may have excellent success by just aiming to eat less and exercise more. In the same way, for many people “just cheer up!” may be sufficient to avoid depression; for many people “just don’t have sex if you don’t want to have babies!” may be sufficient to avoid unwanted pregnancy; etc.
But there are plenty of people for whom that doesn’t work so well, and this is true even among very smart people, very successful people, or almost any category of people not gerrymandered to force it to be false. And for those people, the “just do X” advice simply will not work, and sneering at them because they are casting around for methods other than “just do X” is simply a sign of callousness or incomprehension.
This is true. But, unfortunately, those people often come to the conclusion that “X never works” and they defend this position with great tenacity.
The typical mind (or body) fallacy is pervasive :-/
You focused on akrasia, and obviously this is a component.
My guess was: they’re all wildly underdetermined. “Cheer up” isn’t a primitive op. “Don’t have sex” or “eat less and exercise more” sound like they might be primitive ops, but can be cashed out in many different ways. “Eat less and exercise more, without excessively disrupting your career/social life/general health/etc” is not a primitive op at all, and may require many non-obvious steps.
The specifics are very different in each of these examples, as Wes_W noted here.
Is the only one that’s potentially actively bad advice, for the same reason “pee a lot, don’t drink any water, and stay away from heavy food like vegetables” is bad advice.
Is at least not actively bad advice, but “cheer up” isn’t a primitive.
Is actually good advice, and a little investigation reveals that the people saying otherwise are placing nearly as much (or even more) value on “you having sex” as on “you not getting pregnant”.
Surprisingly, some people don’t even believe this. I know a sizable group of Paleo proponents and some fruitarians who say that you can eat whatever quantity of food X and this will not have negative effects on your weight. There are also people who think this advice won’t work because they’ve tried it and it didn’t have any effect (but actually they weren’t aware that they were not eating less).
I am among those people (those who fail to follow the simple advice, not the very smart or very successful). But I recognize that my failure lies in getting from the simple cognition of “I must eat less” to actually eating less. That doesn’t make “eat less” useless; it makes it incomplete. Indeed, when I found a system that allowed me to eat less using whatever low willpower I have, I did start to lose weight.
“But actually they weren’t aware that they were not eating less.”
This is why I advocate using a Beeminder weight goal (or some equivalent): weigh yourself every day, and don’t eat for the rest of the day when you are above the center line. When you are below it, you can eat whatever you want for the rest of the day.
This doesn’t take very much willpower because there is a very bright line: you don’t have to carefully control what or how much you eat; either you eat today or you don’t.
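A minimal sketch of that decision rule, assuming a simple straight center line from a starting weight to a goal weight (Beeminder’s actual road is more configurable than this; the function names and numbers below are purely illustrative):

```python
from datetime import date

def center_line(start_weight, goal_weight, start_day, goal_day, today):
    """Linear 'center line' from start weight to goal weight; a crude stand-in
    for Beeminder's road (assumption: a straight line, no flat spots)."""
    frac = (today - start_day).days / (goal_day - start_day).days
    return start_weight + frac * (goal_weight - start_weight)

def eat_today(measured_weight, start_weight, goal_weight, start_day, goal_day, today):
    """The bright-line rule described above: eat normally only if today's
    weigh-in is at or below the center line."""
    return measured_weight <= center_line(start_weight, goal_weight,
                                          start_day, goal_day, today)

# Hypothetical example: 200 lb on Jan 1, aiming for 180 lb by Jul 1.
print(eat_today(measured_weight=195.0, start_weight=200.0, goal_weight=180.0,
                start_day=date(2015, 1, 1), goal_day=date(2015, 7, 1),
                today=date(2015, 2, 1)))   # True: below the line, eat freely today
```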
Do scales actually work with enough accuracy that doing this even makes any sense?
It doesn’t matter. Fluctuations with scales and with water retention may mean that you may end up fasting an extra day here and there for random reasons, but you will also end up eating on extra days for the same reason. It ends up the same on average.
That has some issues. First, changes in water retention jitter your daily weight by a pound or two. Second, you assume good tolerance for intermittent fasting. If you weigh yourself in the morning, decide you’re not going to eat for the whole day, and then suffer a major sugar crash in the afternoon, that will be problematic.
Yes, it won’t work for people who can’t manage a day without eating at least from time to time, although you can also try slowing down the rate of change.
As I said in another comment, changes in water retention (and scale fluctuations etc.) don’t really matter because it will come out the same on average.
Volatility matters. Imagine that one day the temperature in your house was set to 50F (+10C) and the next day—to 90F (+32C). On the average it comes out to 70F (+20C), so it’s fine, right?
Creating a calorie deficit will cause weight loss. Just like abstinence will prevent pregnancy.
Depression is not like this. You can’t necessarily will yourself free of depression. You could will yourself to “act happy” until the grave, but it wouldn’t necessarily change your neurons.
The point is that you can’t necessarily will yourself to eat less or not have sex or stop wasting time, either. It looks as if you can, but appearances can be misleading.
I agree that depression is a more extreme case, though; a depressed person may be unable to “cheer up” on any single occasion, whereas I think most people can resist temptations to food, sex and timewasting once if they really need to.
(Also, depression isn’t just persistent unhappiness, but that’s usually part of it and is what would be fixed by Just Cheering Up, were that possible for the sufferer.)
Do you think you can necessarily will yourself to drink less (alcohol)?
The debate about “personal responsibility” vs “can’t help him/herself” is very old.
No. More precisely: I’m pretty sure I can. I’m pretty sure most alcoholics can’t. But they may be able to will themselves to drink no alcohol at all, just as it may be easier to follow a diet like “nothing containing sugar” than one like “no more than 2000 kcal/day”.
I suggest that we should actually care less about whether in some abstract sense we can do these things (exactly what we “can” do will probably depend strongly on the definition of “can”), and more about whether we will. And on that, I think the empirical evidence is pretty good: for many people, just deciding to eat less will probably not result in actually losing weight and keeping it off.
These are somewhat different concerns in the sense that “can” is not sufficient for “will”, but it is necessary for “will”. Since I cannot fly by flapping my arms, the question of whether I will fly this way doesn’t have much meaning.
I suggest that, instead, we stop pretending that there are solutions suitable to absolutely everyone. People are different and are sufficiently different to require quite different approaches. If we take weight as the example, some people (commonly called “that bitch/bastard” :-D) can eat whatever they want and maintain weight; some people can control their weight purely by willing themselves to eat less; some people can control their weight by setting up a system of tricks and misdirections for themselves which works; some people cannot control their weight by themselves and need external help; some people can’t do it even with external help and need something like a gastric bypass; and some people have a sufficiently screwed up metabolism so that pretty much nothing will make them slim.
There is no general solution—it depends.
I was under the impression that that was pretty much exactly what I’d been saying :-).
Yes, the effect of diets on weight-loss is roughly mediated by their effect on caloric intake and expenditure. But this does not mean that “eat fewer calories and expend more” is good advice. If you doubt this, note that the effect of diets on weight-loss is also mediated by their effects on mass, but naively basing our advice on conservation of mass causes us to generate terrible advice like “pee a lot, don’t drink any water, and stay away from heavy food like vegetables”.
The causal graph to think about is “advice → behavior → caloric balance → long-term weight loss”, where only the advice node is modifiable when we’re deciding what advice to give. Behavior is a function of advice, not a modifiable variable. Empirically, the advice “eat fewer calories” doesn’t do a good job of making people eat fewer calories. Empirically, advice like “eat more protein and vegetables” or “drink olive oil between meals” does do a good job of making people eat fewer calories. The fact that low-carb diets “only” work by reducing caloric intake does not mean that low-carb diets aren’t valuable.
I think it has a net negative effect on the global dieting discussion that it adds these superfluous steps on top of the actual causes of weight loss.
Having a rational discussion about satiation is one thing. It is a long way from the woo that has been involved in getting people to believe carbs are magically evil.
I remember first digging into the Atkins diet. I thought, “No. This is dumb. It’s just calorie restriction. Why are they pretending it’s more than that?” But I shut my mouth for a while because I didn’t understand the science and Atkins and other low carb variants seemed so popular.
“Eat less and exercise more” is the best dieting advice (or, perhaps even better, “Create a reasonable calorie deficit over time”). It may be difficult to follow, but it’s clear. It allows people to rationally attack the problem of “how” to accomplish weight loss. Everything else is just muddying the waters.
After changing my mind dozens of times on this topic, exactly what you say is the only stable conclusion I have reached. Effective weight loss is effective calorie reduction; whatever you do to reach that is whatever works for you.
Seconded. I’ve noticed that the diets I have tried successfully (which have all been variants on “consume food during a limited number of sittings per day”) have worked by allowing me to use noticeably less willpower than it would normally take for me to use to limit my calorie intake by that much per day.
Hunger is the big diet killer. It’s very hard to maintain a diet if you walk around hungry all day and eat meals that fail to sate your appetite. Losing weight is a lot easier once you find a way to manage your hunger. One of the strengths of the low-carb diet is that fat and protein are a lot better than carbs at curbing hunger.
Yeah. But that’s not really the point. The reason low carb diets lead to weight loss is because they restrict calories. I’m aware of many dieting tricks that can assist, but a calorie deficit must be created in order for weight to be lost.
This may seem self-evident, but there is still debate about it. Carbs are not magically evil; they are just a macronutrient that happens to be a large share of the calories in a typical Western diet. That’s it. No magic.
If you don’t eat after 6pm, never eat dessert or fast food, eat a larger breakfast, have a salad or X raw vegetables every day, drink X water every day—these can all help you lose weight. But there is nothing magical, or even scientific, about any of these tactics. It’s all stuff we’ve known for 100 years.
Likewise, if you walk 2 miles a day, every day for a year, you’ll burn X calories that will lead to X weight loss. It’s just math.
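As a rough back-of-the-envelope version of that math, assuming about 100 kcal burned per mile walked and the common (and only approximate) 3,500 kcal-per-pound-of-fat rule of thumb:

```python
kcal_per_mile = 100       # ballpark for walking, roughly a 180 lb person; an assumption
kcal_per_lb_fat = 3500    # common rule of thumb, also only approximate

miles_per_day, days = 2, 365
total_kcal = miles_per_day * days * kcal_per_mile
print(total_kcal, round(total_kcal / kcal_per_lb_fat, 1))  # 73000 kcal, ~20.9 lb, all else equal
```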
My point was to specifically disparage diets like the Atkins Diet. It does nothing apart from restricting calories, yet libraries have been written about the magic of how and why it works. It’s all just noise aimed at selling books, etc. to people who are looking for help.
No one in this thread is disputing that you need a calorie deficit to lose weight. My contention is that this is merely the beginning, not the end. Let’s refer to the following passage from the linked article:
A diet should be realistic for free-living individuals. An obese person who wants to lose 50+ lb. could expect to be at it for the better part of a year. A diet that leaves you hungry all day is doomed to fail: it’s unrealistic to expect pure willpower to last that long. That is the point of my post about hunger control. Disregarding it or dismissing it as a mere trick is to ignore that a very important part of dieting is making sure the dieter sticks to the diet.
Quite the contrary. The Atkins Diet is not just about losing the weight. It also includes a plan to keep it off. Maintaining weight loss is generally harder than losing the weight in the first place. Yo-yo dieting is a very real problem. The problem with naive calorie restriction is that it doesn’t instill good eating habits that can be maintained once the weight-loss period ends. The Atkins Diet addresses this and is designed to ease one into eating habits that will maintain the weight loss.
It isn’t unrealistic to create a reasonable calorie deficit for a while... and I have no idea what a “free-living” individual is. It may be difficult to lose weight, but it’s like anything else that is difficult. It requires focused effort over time. Habits can be hard to change. There are plenty of tricks and hacks to help. Avoiding carbs is a good one because it will automatically eliminate 25-60% of an individual’s daily calorie consumption. That’s all it will do. You could avoid fat, too. Same effect. Fat and carbs = calories. No magic.
Yeah, but strawman. Dieting involves some hunger. It’s not going to kill you. It’s just part of the adjustment to a more healthy level of consumption.
Naive calorie restriction is just regular calorie restriction with a negative name. Good eating habits entail calorie control. That’s not naive. It’s basic.
Weight loss is generally really simple. We should be grateful that this is so. Every discussion I’ve seen on LW makes dieting much more complicated than it need be. It’s very hard for many people, but that doesn’t mean it’s complicated.
By “naive” I just mean calorie restriction without any other consideration. For example, a diet where one replaces a large pizza, a 2-liter bottle of Coca-Cola, and a slice of chocolate cake with half a large pizza, 1 liter of Coca-Cola, and a smaller slice of chocolate cake is what I’d consider naive calorie restriction. I don’t know that anyone would seriously argue that the restricted version even remotely resembles good eating habits.
Lest you accuse me of straw-manning, let it be noted that many obese people subsist on a diet consisting of fast food and junk food. In fact, malnutrition is a very real problem among the obese. That’s right: you can eat 5k+ Calories a day and still exhibit signs of malnutrition if all you eat is junk. When I speak of instilling good eating habits, I have in mind people who exhibit severe ignorance or misconception of basic nutrition.
A low-carb diet is not just a matter of eating what you normally eat, minus the carbohydrates. That’s going to end about as well as a vegetarian diet where you simply cut out the meat from your normal diet. You run into a micronutrient deficiency that can end up causing problems if the new diet is sustained for several months.
It’s an empirical fact that some foods are more filling than others and keep you feeling full for a longer period of time, even if the number of calories consumed is the same. That’s why people care about the glycemic index. I have tried losing weight several times over the last seven years or so. There are diets where you feel satisfied most of the time, then there are diets where you finish a meal feeling as hungry as you did when you started. The psychological difference between the two is quite profound and hardly warrants the charge of “strawman”.
Well, of course. I never said or implied calories were the whole ball game. You’re conflating weight loss and nutrition throughout.
No, but you’d be hard pressed to make up those calories by eating proteins. That is quite the point.
I’ve mentioned satiation as a real issue that ought to be addressed by any rational diet plan.
But, again, it isn’t the aim that a diet should involve no hunger when compared to your current meal plan. That is just plain silly and irrational.
Losing weight is like any other pursuit—it requires the expenditure of resources: Will power, focus, effort, energy, discipline. It may diminish your capacity to pursue other things for a time. It doesn’t mean you have to be bedridden or incapacitated. Again, any other pursuit is like this: Working long days on a big project at work, training for a taxing athletic event, studying for difficult classes and exams, etc. Dieting is a significant project to take on.
There seems to be this idea floating around that you can diet, lose lots of weight, and not have it consume some bandwidth in your life. BS. There are some great, rational hacks available, but it takes some sustained work to lose weight. There isn’t any way around that.
Short term, the body is resilient enough that you can go on a crash diet to quickly drop a few pounds without worrying about nutrition. On the other hand, nutrition is an essential consideration in any weight-loss plan that’s going to last many months. That’s why I associate the two.
Certain approaches purport to do this very thing by means of suppressing the appetite so that one naturally eats less. Consider, for example, the Shangri-La diet.
I will grant that if one wants to lose 2+ pounds a week over a long period of time, then the pangs of hunger are unavoidable.
Agreed. This is especially true if there’s a psychological component to the initial weight gain. For example, stress eaters will have to either avoid stress or figure out a new coping mechanism if they want to lose weight and maintain the weight loss.
Fat has calories too. If someone eats fewer carbs, why do you believe they will not eat more fat? I could swap carbs and fat in your statement and get something with the same logic.