
Affective Death Spirals

Eliezer YudkowskyDec 2, 2007, 4:44 PM
114 points
46 comments · 2 min read · LW link · Archive
Affect Heuristic · Rationality · Affective Death Spiral · Emotions

Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn’t have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.

If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that’s probably how they got started.

(Just keep an eye out, and you’ll observe much that seems to confirm this theory . . .)

This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.

But it’s nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.

A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.

Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.

The halo effect is that any perceived positive characteristic (such as attractiveness or strength) increases perception of any other positive characteristic (such as intelligence or courage). Even when it makes no sense, or less than no sense.

Positive characteristics enhance perception of every other positive characteristic? That sounds a lot like how a fissioning uranium atom sends out neutrons that fission other uranium atoms.

Weak positive affect is subcritical; it doesn’t spiral out of control. An attractive person seems more honest, which, perhaps, makes them seem more attractive; but the effective neutron multiplication factor is less than one. Metaphorically speaking. The resonance confuses things a little, but then dies out.
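The subcritical/supercritical distinction can be made concrete with a toy calculation (my own illustration of the metaphor, not anything from the post): treat each confirming thought as spawning k new confirming thoughts on average, and sum the generations.

```python
def total_confirmations(k, generations=50):
    """Toy chain reaction: each 'thought' spawns k new ones on average.

    Sums the sizes of successive generations of confirming thoughts.
    """
    size, total = 1.0, 0.0
    for _ in range(generations):
        total += size
        size *= k
    return total

# Subcritical (k < 1): the geometric series converges; the resonance dies out.
print(total_confirmations(0.5))  # approaches 1 / (1 - 0.5) = 2

# Supercritical (k > 1): each round amplifies the last; the spiral runs away.
print(total_confirmations(1.5))
```

With k just below 1 the total stays bounded no matter how many generations you run; with k above 1 it grows without limit, which is the death-spiral regime.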

With intense positive affect attached to the Great Thingy, the resonance touches everywhere. A believing Communist sees the wisdom of Marx in every hamburger bought at McDonald’s; in every promotion they’re denied that would have gone to them in a true worker’s paradise; in every election that doesn’t go to their taste; in every newspaper article “slanted in the wrong direction.” Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better—positive reinforcement—and of course, when something feels good, that, alas, makes us want to believe it all the more.

When the Great Thingy feels good enough to make you seek out new opportunities to feel even better about the Great Thingy, applying it to interpret new events every day, the resonance of positive affect is like a chamber full of mousetraps loaded with ping-pong balls.

You could call it a “happy attractor,” “overly positive feedback,” a “praise locked loop,” or “funpaper.” Personally I prefer the term “affective death spiral.”

Coming up next: How to resist an affective death spiral.1

1. Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.

  • Why Our Kind Can’t Cooperate by Eliezer Yudkowsky (Mar 20, 2009, 8:37 AM; 295 points)
  • Politics is way too meta by Rob Bensinger (Mar 17, 2021, 7:04 AM; 291 points)
  • Raising the Sanity Waterline by Eliezer Yudkowsky (Mar 12, 2009, 4:28 AM; 241 points)
  • Levels of Action by alyssavance (Apr 14, 2011, 12:18 AM; 187 points)
  • Crisis of Faith by Eliezer Yudkowsky (Oct 10, 2008, 10:08 PM; 180 points)
  • Illegible impact is still impact by Gordon Seidoh Worley (EA Forum; Feb 13, 2020, 9:45 PM; 135 points)
  • Don’t Revere The Bearer Of Good Info by CarlShulman (Mar 21, 2009, 11:22 PM; 126 points)
  • Einstein’s Superpowers by Eliezer Yudkowsky (May 30, 2008, 6:40 AM; 120 points)
  • Guardians of Ayn Rand by Eliezer Yudkowsky (Dec 18, 2007, 6:24 AM; 120 points)
  • Bayesians vs. Barbarians by Eliezer Yudkowsky (Apr 14, 2009, 11:45 PM; 106 points)
  • How to Save the World by Louie (Dec 1, 2010, 5:17 PM; 103 points)
  • The Trouble With “Good” by Scott Alexander (Apr 17, 2009, 2:07 AM; 100 points)
  • Go Forth and Create the Art! by Eliezer Yudkowsky (Apr 23, 2009, 1:37 AM; 90 points)
  • Can Humanism Match Religion’s Output? by Eliezer Yudkowsky (Mar 27, 2009, 11:32 AM; 83 points)
  • The Moral Void by Eliezer Yudkowsky (Jun 30, 2008, 8:52 AM; 79 points)
  • Fake Utility Functions by Eliezer Yudkowsky (Dec 6, 2007, 4:55 PM; 72 points)
  • Two Truths and a Lie by Psychohistorian (Dec 23, 2009, 6:34 AM; 70 points)
  • My Childhood Death Spiral by Eliezer Yudkowsky (Sep 15, 2008, 3:42 AM; 70 points)
  • Science Doesn’t Trust Your Rationality by Eliezer Yudkowsky (May 14, 2008, 2:13 AM; 69 points)
  • Do Scientists Already Know This Stuff? by Eliezer Yudkowsky (May 17, 2008, 2:25 AM; 66 points)
  • Changing Your Metaethics by Eliezer Yudkowsky (Jul 27, 2008, 12:36 PM; 64 points)
  • Politics is far too meta by RobBensinger (EA Forum; Mar 17, 2021, 11:57 PM; 58 points)
  • A review of Principia Qualia by jessicata (Jul 12, 2023, 6:38 PM; 56 points)
  • Truth: It’s Not That Great by ChrisHallquist (May 4, 2014, 10:07 PM; 50 points)
  • A Prodigy of Refutation by Eliezer Yudkowsky (Sep 18, 2008, 1:57 AM; 49 points)
  • Seduced by Imagination by Eliezer Yudkowsky (Jan 16, 2009, 3:10 AM; 48 points)
  • Understanding vipassana meditation by [deleted] (Oct 3, 2010, 6:12 PM; 48 points)
  • Fake Fake Utility Functions by Eliezer Yudkowsky (Dec 6, 2007, 6:30 AM; 42 points)
  • How can I get help becoming a better rationalist? by TeaTieAndHat (Jul 13, 2023, 1:41 PM; 40 points)
  • A Suggested Reading Order for Less Wrong [2011] by jimrandomh (Jul 8, 2011, 1:40 AM; 38 points)
  • Politics and Awful Art by Eliezer Yudkowsky (Dec 20, 2007, 3:46 AM; 37 points)
  • Degrees of Radical Honesty by MBlume (Mar 31, 2009, 8:36 PM; 34 points)
  • In Praise of Maximizing – With Some Caveats by David Althaus (Mar 15, 2015, 7:40 PM; 32 points)
  • CarlShulman's comment on Meditation, insight, and rationality. (Part 1 of 3) by DavidM (Apr 29, 2011, 10:42 PM; 30 points)
  • Some of the best rationality essays by Jack R (Oct 19, 2021, 10:57 PM; 29 points)
  • The Importance of Self-Doubt by multifoliaterose (Aug 19, 2010, 10:47 PM; 28 points)
  • Heading Toward Morality by Eliezer Yudkowsky (Jun 20, 2008, 8:08 AM; 27 points)
  • A Premature Word on AI by Eliezer Yudkowsky (May 31, 2008, 5:48 PM; 27 points)
  • Failure By Affective Analogy by Eliezer Yudkowsky (Nov 18, 2008, 7:14 AM; 27 points)
  • Expecting Beauty by Eliezer Yudkowsky (Jan 12, 2008, 3:00 AM; 27 points)
  • The Uses of Fun (Theory) by Eliezer Yudkowsky (Jan 2, 2009, 8:30 PM; 23 points)
  • Being wrong in ethics by Stuart_Armstrong (Mar 29, 2019, 11:28 AM; 22 points)
  • Parallelizing Rationality: How Should Rationalists Think in Groups? by almkglor (Dec 17, 2012, 4:08 AM; 21 points)
  • simplicio's comment on Unknown knowns: Why did you choose to be monogamous? by WrongBot (Jun 27, 2010, 10:50 PM; 21 points)
  • That Crisis thing seems pretty useful by Eliezer Yudkowsky (Apr 10, 2009, 5:10 PM; 18 points)
  • Getting Nearer by Eliezer Yudkowsky (Jan 17, 2009, 9:28 AM; 16 points)
  • Concept Safety: World-models as tools by Kaj_Sotala (May 9, 2015, 12:07 PM; 14 points)
  • LucasSloan's comment on Rationality Quotes: July 2010 by komponisto (Jul 2, 2010, 4:31 AM; 13 points)
  • Nornagest's comment on Rationality for Other People by atucker (Feb 11, 2011, 8:34 AM; 12 points)
  • Vive-ut-Vivas's comment on What I’ve learned from Less Wrong by Louie (Nov 20, 2010, 1:34 PM; 12 points)
  • Cross-Cultural maps and Asch’s Conformity Experiment by Sable (Mar 9, 2016, 12:40 AM; 10 points)
  • [deleted]'s comment on Bayesian Buddhism: a path to optimal enlightenment by David_Allen (Oct 7, 2010, 6:50 PM; 10 points)
  • hairyfigment's comment on Rationality Quotes March 2014 by MalcolmOcean (Mar 7, 2014, 6:38 PM; 10 points)
  • Rationality Reading Group: Part J: Death Spirals by Gram_Stone (Sep 24, 2015, 2:31 AM; 7 points)
  • Unnamed's comment on Averaging value systems is worse than choosing one by PhilGoetz (Apr 30, 2010, 4:24 AM; 6 points)
  • wedrifid's comment on The Power of Reinforcement by lukeprog (Jun 21, 2012, 3:54 PM; 6 points)
  • endoself's comment on Subjective Relativity, Time Dilation and Divergence by jacob_cannell (Feb 13, 2011, 5:57 AM; 6 points)
  • [SEQ RERUN] Affective Death Spirals by MinibearRex (Nov 14, 2011, 5:34 AM; 6 points)
  • andreas's comment on Open Thread September, Part 3 by LucasSloan (Sep 28, 2010, 11:33 PM; 6 points)
  • Lumpyproletariat's comment on Read The Sequences by Quadratic Reciprocity (EA Forum; Dec 24, 2022, 11:31 PM; 5 points)
  • SilasBarta's comment on Ask LessWrong: Human cognitive enhancement now? by taw (Jun 22, 2009, 5:02 PM; 5 points)
  • WrongBot's comment on Open Thread: July 2010 by komponisto (Jul 2, 2010, 4:49 AM; 5 points)
  • cupholder's comment on Open Thread June 2010, Part 3 by Kevin (Jun 14, 2010, 9:53 PM; 5 points)
  • satt's comment on The Stable State is Broken by Bakkot (Mar 13, 2012, 10:57 PM; 5 points)
  • [deleted]'s comment on Why Our Kind Can’t Cooperate by Eliezer Yudkowsky (Dec 16, 2012, 6:02 PM; 4 points)
  • pedanterrific's comment on Practicing what you preach by TwistingFingers (Oct 23, 2011, 11:23 PM; 4 points)
  • Viliam_Bur's comment on Open Thread, May 16-31, 2012 by OpenThreadGuy (May 16, 2012, 1:33 PM; 4 points)
  • jimrandomh's comment on Rationalist Role in the Information Age by byrnema (Apr 30, 2009, 10:45 PM; 4 points)
  • Richard_Kennaway's comment on Of the Qran and its stylistic resources: deconstructing the persuasiveness Draft by Raw_Power (Oct 13, 2010, 6:22 AM; 3 points)
  • Sharmake's comment on The Wages of North-Atlantic Bias by Sach Wry (EA Forum; Aug 20, 2022, 4:42 PM; 2 points)
  • Morendil's comment on Diseased disciplines: the strange case of the inverted chart by Morendil (Feb 7, 2012, 7:37 AM; 2 points)
  • arundelo's comment on Find yourself a Worthy Opponent: a Chavruta by Raw_Power (Jul 7, 2011, 2:16 AM; 2 points)
  • dclayh's comment on Man-with-a-hammer syndrome by Shalmanese (Dec 14, 2009, 7:29 PM; 2 points)
  • Vladimir_Nesov's comment on Where’s Your Sense of Mystery? by Scott Alexander (Apr 26, 2009, 2:50 PM; 2 points)
  • WrongBot's comment on A Challenge for LessWrong by simplicio (Jul 1, 2010, 7:28 PM; 2 points)
  • Dojan's comment on Argument Screens Off Authority by Eliezer Yudkowsky (Oct 18, 2011, 11:12 AM; 1 point)
  • lessdazed's comment on Beware of Other-Optimizing by Eliezer Yudkowsky (Aug 7, 2011, 1:31 PM; 1 point)
  • Practicing what you preach by TwistingFingers (Oct 23, 2011, 6:12 PM; 1 point)
  • What does aligning AI to an ideology mean for true alignment? by StanislavKrym (Mar 30, 2025, 3:12 PM; 1 point)
  • Desrtopa's comment on [Link] Why don’t people like markets? by GLaDOS (Jun 24, 2012, 4:51 AM; 1 point)
  • Roko's comment on Moral Error and Moral Disagreement by Eliezer Yudkowsky (Aug 13, 2008, 10:25 AM; 0 points)
  • SoullessAutomaton's comment on ESR’s comments on some EY:OB/LW posts by Eliezer Yudkowsky (Jun 20, 2009, 12:44 AM; 0 points)
  • Jay_Schweikert's comment on Only say ‘rational’ when you can’t eliminate the word by Eliezer Yudkowsky (May 31, 2012, 2:32 PM; 0 points)
  • Minds_Eye's comment on First(?) Rationalist elected to state government by Eneasz (Nov 13, 2014, 7:58 PM; 0 points)
  • [deleted]'s comment on Rational Repentance by Mass_Driver (Jan 16, 2011, 6:44 AM; 0 points)
  • passive_fist's comment on Less Wrong lacks direction by casebash (May 25, 2015, 10:25 PM; -2 points)
  • If reason told you to jump off a cliff, would you do it? by Shalmanese (Dec 21, 2009, 3:54 AM; -14 points)
Part of the sequence: Death Spirals. Previous: Superhero Bias. Next: Resist the Happy Death Spiral
  • JayDuggerDec 2, 2007, 5:45 PM
    6 points

    Please define “magisteria” in the follow-up post. I tried three dictionaries without finding its definition.

  • Ron_HardinDec 2, 2007, 5:52 PM
    0 points

    Phlogiston was the cause of fire. It’s a reification error, is all. Like “power” in political discourse, which is supposed to be a thing you can acquire, or lose, or contest. Whole analyses depend on it.

  • Video2Dec 2, 2007, 5:55 PM
    6 points

    magisteria (plural) – Realms of belief, for example, the realm of religious belief taken together with the realm of scientific belief. The absence of legitimate conflict between these realms was termed non-overlapping magisteria by Stephen Jay Gould.

  • Recovering_irrationalistDec 2, 2007, 7:06 PM
    5 points

    (Just keep an eye out, and you’ll observe much that seems to confirm this theory...)

    I hope everyone was paying attention to that bit :-)

    Coming tomorrow: How to resist an affective death spiral.

    Please include judging how much to resist what may partly be due to the spiral, so as not to overcompensate. Sometimes a “Great Thingy” is genuinely great.

  • sam_kayleyDec 2, 2007, 7:23 PM
    2 points

    Affective death spiral sounds like something to do with depression, praise locked loop may give a more accurate impression of the idea.

  • Cyan2Dec 2, 2007, 8:24 PM
    9 points

    “Affective death spiral” sounds like the process by which I became a militant evangelical Bayesian. But I got better: now I’m only a fundamentalist Bayesian, and my faith does not require me to witness the Bayesian Gospel to those who aren’t interested.

  • Joshua_FoxDec 2, 2007, 8:45 PM
    0 points

    I’ve always thought it was silly to call great football players “heroes.” But in fact, people can be heroes (in the sense of role models) in one area of life and not in others. You can admire and be inspired by a role model’s athleticism, intellectual honesty, kindness, etc. even though these are not usually found all together in one person.

  • Cihan_BaranDec 2, 2007, 11:18 PM
    3 points

    This reminds me of a Karl Popper excerpt that I read several years ago. Popper levels similar charges against Marxism and Freudianism:

    http://www.stephenjaygould.org/ctrl/popper_falsification.html

    Thanks

  • Ron_HardinDec 2, 2007, 11:34 PM
    0 points

    Death spiral comes from airplanes and pilot disorientation leading to corrective action making a descending turn progressively worse. Without the disorientation, it doesn’t happen.

    Flying blind without instruments leads to disorientation very fast, if you’re doing the flying. If you’re just a passenger, you reorient from what the pilot does; but it’s fatal if the pilot does that, without instruments to reorient himself from.

    Disorientation is the key to take away.

  • Eliezer YudkowskyDec 3, 2007, 12:29 AM
    1 point

    What would you think of “Happy Death Spiral”?

  • Caledonian2Dec 3, 2007, 1:41 AM
    9 points

    I would probably avoid taunting it.

  • Tom_McCabe2Dec 3, 2007, 2:39 AM
    1 point

    “Coming tomorrow: How to resist an affective death spiral. (Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.)”

    Hmmm… maybe you could consider scenarios in which the Great Thingy gets you killed or seriously injured? Or by extrapolating it out until you reach predictions that are obviously absurd (eg, my boss is part of a government anti-Marxist conspiracy)?

    • bigjeff5Feb 11, 2011, 9:39 PM
      3 points

      Or by extrapolating it out until you reach predictions that are obviously absurd (eg, my boss is part of a government anti-Marxist conspiracy)?

      That works just fine until your boss actually is part of a government anti-Marxist conspiracy...

  • Stuart_ArmstrongDec 3, 2007, 2:45 PM
    1 point

    Coming tomorrow: How to resist an affective death spiral.

    Listening to some really good satire or mockery of the thing admired would help—it would dampen your emotional commitment to it, while leaving your rational commitment intact.

    Trying to picture a world—sci-fi if needed—where your pet theory is not true may help, as long as you can create a reasonable functioning world, not a caricature...

    But I’m feeling it’s something far more cunning coming along...

  • Stuart_ArmstrongDec 3, 2007, 3:32 PM
    24 points

    On a more serious note: cut up your Great Thingy into smaller independent ideas, and treat them as independent.

    For instance, a Marxist would cut up Marx’s Great Thingy into a labour theory of value, a theory of the political relations between classes, a theory of wages, and a theory of the ultimate political state of mankind. Then each of them should be assessed independently, and the truth or falsity of one should not halo on the others. If we can do that, we should be safe from the spiral, as each theory is too narrow to start a spiral on its own.

    Same thing for every other Great Thingy out there.

    • Resist the Happy Death Spiral by Eliezer Yudkowsky (Dec 4, 2007, 1:15 AM; 94 points)
    • Associated vs Relevant by abramdemski (Jun 15, 2015, 1:12 AM; 19 points)
    • satt's comment on [Link] Detachment by John_Maxwell (Feb 11, 2013, 2:47 AM; 8 points)
    • Viliam's comment on Open Thread Feb 29 - March 6, 2016 by Elo (Mar 4, 2016, 3:35 PM; 6 points)
    • satt's comment on Seeking links for the best arguments for economic libertarianism by Bart119 (May 4, 2012, 11:38 PM; 1 point)
  • gDec 3, 2007, 10:25 PM
    1 point

    But some Great Thingies might not be readily splittable. For instance, consider the whole edifice of theoretical physics, which is a pretty good candidate for a genuinely great Thingy (though not of quite the same type as most of the Great Thingies under discussion here). Each bit makes most sense in the context of the whole structure, and you can only appreciate why a given piece of evidence is evidence for one bit if you have all the other bits available to do the calculations with.

    Of course, all this could just indicate that the whole edifice of theoretical physics (if taken as anything more than a black box for predicting observations) is a self-referential self-supporting delusion, and in a manner of speaking it’s not unlikely that that’s so—i.e., the next major advance in theoretical physics could well overturn all the fundamentals while leaving the empirical consequences almost exactly the same. Be that as it may, much of the value of theoretical physics comes from the fact that it is a Great Thingy and not just a collection of Little Thingies, and it seems like it would be a shame to adopt a mode of thinking that prevents us appreciating it as such.

  • AdirianDec 3, 2007, 11:25 PM
    3 points

    Notably, regarding theoretical physics, there are at least nine models for modern theoretical physics, all of which can perfectly explain the empirical observations, and all of which are completely and totally contradictory to one another. (Okay, almost all of which. There are a few compatibilities scattered amongst them. Neorealism can work fine with the multiverse model, and there are a small handful of models which are derived from Bohr’s interpretations and are semicompatible with one another.)

  • gDec 4, 2007, 12:09 AM
    4 points

    I think “completely and totally contradictory” is putting it too strongly, since they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make. Extreme verificationists would argue that the bits they disagree about are meaningless :-).

  • AdirianDec 4, 2007, 12:16 AM
    1 point

    They agree about observations—but we already have the observations, so that doesn’t mean much. Any theory worth thinking about isn’t going to disagree about those observations, which, after all, they are created to explain. They disagree in every way it is meaningful that they, theories about the reason why, MAY disagree—in the reasons why. And extreme verificationists can go take a leap off a logical cliff when it comes to discussing differences in the reasons why something may be.

  • Nick_TarletonDec 4, 2007, 12:41 AM
    0 points

    “they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make.”

    Not entirely true.

  • gDec 4, 2007, 12:58 AM
    0 points

    Nick: Oh, sorry, I forgot that there are still people who take the Copenhagen interpretation seriously. Though actually I suspect that they might just decree that observation by a reversible conscious observer doesn’t count. That would hardly be less plausible than the Copenhagen interpretation itself. :-)

    (I also have some doubt as to whether sufficiently faithful reversibility is feasible. It’s not enough for the system to be restored to its prior state as far as macroscopic observations go; the reversal needs to be able to undo decoherence, so to speak. That seems like a very tall order.)

    Adirian: the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)

  • AdirianDec 4, 2007, 1:12 AM
    −1 points

    Nick—that proof works fine for any of the neorealist models, in which Everett’s model is, variably, placed. The problem is in interpretation. Remember that there is great disagreement in the Copenhagen models about where, exactly, waveform collapse happens—after all, if one treats the quantum measurement device itself as being in a quantum state, then 100% correlation may be acceptable. (Because the waveform state of the computer wasn’t collapsed until the first and third measurements were examined together.)

    The real problem here is that the Copenhagen models are effectively unscientific, since it is fundamentally impossible to disprove the concept that anything that is unmeasured is in an uncertain/undefined state. It’s an intellectual parlour trick, and shouldn’t be taken seriously.

    • tlhonmeyJan 14, 2021, 5:02 PM
      1 point

      At the same time though, not calculating a value until something actually needs it is exactly the kind of efficiency hack one would really want to implement if they were going to simulate an entire universe...

      So if we are in some level of sub-reality that would make it much more likely that the model is correct, even if there’s no way for us to actually test it...

      So from a practical point of view, it comes down entirely to which model lets us most effectively predict things. Since that’s what we actually care about. I’ll take a collection of “parlour tricks” that can tell me things about the future with high confidence over a provably self-consistent system that is wrong more often.

    • bigjeff5Feb 11, 2011, 10:56 PM
      0 points

      Upvoted because, while I don’t know the details of the Copenhagen models, if it is true they rely on “the concept that anything that is unmeasured is in an uncertain/undefined state”, then until some method of testing this state is devised the theories are effectively pseudo-science.

      The Popper essay, originally mentioned above, describes the problem nicely.

      It doesn’t speak to the truth or untruth of the theory, just to its scientific status, or lack thereof. In a nutshell, if it’s not testable, it’s not scientific, whether it is true or not. This is why it should not be taken too seriously, at least not until it becomes testable.

  • AdirianDec 4, 2007, 1:21 AM
    0 points

    the fact that their agreement-about-observations was predictable in advance doesn’t make it any less an agreement. (And if you’re talking only about the parts of those theories that are “theories about the reasons why”, bracketing all the agreements about what’s observed and how to calculate it, then I don’t think you are entitled to call the things that disagree completely “models for modern theoretical physics”.)

    It renders that agreement meaningless. If you curve-fit seven points, and come up with a thousand different formulas, the fact that each of these thousand formulas includes each of those seven points produces exactly no additional information. The fact of the matter is that we have discarded every formula which DIDN’T describe those points—that the remaining formulas do describe them tells us absolutely nothing about either the points or the relative value of the formulas with respect to one another.

    At best, out of N formulas, each has a 1/N chance of being correct. (At worst, none of the formulas is correct.)
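Adirian’s curve-fitting point is easy to demonstrate (a toy sketch of my own, not from the thread): two formulas can agree exactly on seven observed points and still disagree wildly everywhere else, so agreement on the data alone says nothing about which extrapolation to trust.

```python
import numpy as np

# Seven observed points, generated here by the simple line y = 2x + 1.
xs = np.arange(7, dtype=float)
ys = 2 * xs + 1

def wiggle(x):
    """A polynomial term that is exactly zero at every observed point."""
    return np.prod([x - xi for xi in xs], axis=0)

def theory_a(x):  # the simple line
    return 2 * x + 1

def theory_b(x):  # fits the same seven points, diverges elsewhere
    return 2 * x + 1 + 0.5 * wiggle(x)

# Both "theories" reproduce every observation exactly...
assert all(theory_a(x) == theory_b(x) for x in xs)

# ...but extrapolate to very different predictions.
print(theory_a(10.0), theory_b(10.0))  # 21.0 vs 302421.0
```

Nothing privileges either formula on the data alone; everything that could distinguish them lies outside the observed points.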

  • Cyan2Dec 4, 2007, 4:24 PM
    1 point
    At best, out of N formulas, each has a 1/N chance of being correct. (At worst, none of the formulas is correct.)

    Technical note: Occam factors (and prior probabilities generally) can cause these chances to deviate from 1/N.
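Cyan’s technical note is just Bayes’ theorem with equal likelihoods (a minimal sketch of the standard calculation, not from the thread): when every formula fits the data equally well, the data cancels out and the posterior chances simply reproduce the priors, so an Occam-style prior moves them away from a uniform 1/N.

```python
from fractions import Fraction

# Four candidate formulas, all fitting the data perfectly (equal likelihood),
# with an Occam-style prior that favours the simpler formulas.
priors = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
likelihoods = [Fraction(1)] * 4  # the data cannot tell the formulas apart

unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)
posteriors = [u / total for u in unnormalized]

print(posteriors)  # identical to the priors, not a uniform 1/4 each
```

Only if the likelihoods differed (some formulas fit the data better than others) would the posteriors move away from the priors.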

  • AdirianDec 5, 2007, 1:04 AM
    0 points

    I didn’t mean specifically, I meant on average. My apologies for the poor phrasing. Yes, any individual formula’s odds of being correct can vary. (To deny this would be to deny Bayesian reasoning, and I think I might get mugged here if I tried that.)

  • Robert_HartworthDec 11, 2007, 3:27 AM
    −7 points

    Hey there. I won’t ever be returning here again. Time never began. Time will never end. We will never exist. Thank you for your time.

  • AchemanDec 12, 2007, 12:34 PM
    0 points

    Don’t know if that’ll solve matters, just trying. This does seem very Popperian—in a bad way, in that it’s an oversimplified approach to theory-formation. What do you think about Kuhn, who finds this kind of reinforcement in normal, productive science—but still allows a distinction between evidence-based science and entirely circular nonscience? What about the idea that we have ‘rings’ of beliefs, and will sacrifice any number of ‘outer-ring’ theory detail to preserve our core beliefs?

  • AchemanDec 12, 2007, 12:47 PM
    0 points

    Yeah, the ‘help’ was a futile attempt to close the open italics tag. Didn’t work, obviously.

  • gDec 12, 2007, 12:59 PM
    1 point

    Adirian (sorry for not noticing your response sooner), the situation is more like: we have a million data points and several models that all fit those points very precisely and all agree very precisely on how to interpolate between those points—but if we try to use them to extrapolate wildly, into regions where in fact we have no way of getting any real data points, they diverge. It also turns out that within the region where we can actually get data—where the models agree—they don’t agree merely by coincidence, but turn out to be mathematically equivalent to one another.

    You are welcome to describe this situation by saying that the models are “completely and totally contradictory”, but I think that would be pretty eccentric.

    (This is of course merely an analogy. I think the reality is even less favourable to your case.)

  • Chip_SmithJan 17, 2008, 2:27 PM
    1 point

    ADS may be observed, most tragically, in the history of “facilitated communication.”

    http://www.cqc.state.ny.us/hottopics/fcwheel.htm

  • Steve3Jul 15, 2008, 8:19 PM
    8 points

    I personally prefer The Law of Fives: “ALL THINGS HAPPEN IN FIVES, OR ARE DIVISIBLE BY OR ARE MULTIPLES OF FIVE, OR ARE SOMEHOW DIRECTLY OR INDIRECTLY APPROPRIATE TO 5.”

    With the corollary: “I find the Law of Fives to be more and more manifest the harder I look.”

    cf. Foucault’s Pendulum; the entire novel.

    • DojanOct 16, 2011, 1:04 AM
      1 point

      Ever since reading The Hitchhiker’s Guide to the Galaxy I’ve seen the number 42 pop up at an alarming rate… Though I guess people use that number more than average for that very same reason. (I know I do!)

  • NightflowFeb 3, 2011, 11:36 AM
    0 points

    Good article, but why do you only talk about positive “death” spirals? It would be just the same with negative thoughts; with added insecurity it would be even harder to break out.

  • mat33Oct 6, 2011, 11:06 AM
    0 points

    In the 20th century, Richard Feynman pointed out that there may be some problem with how we patch our physics by cutting out the neighbourhoods. Nowadays we are patching General Relativity with dark matter (it wasn’t predicted, really) and even dark energy. It looks like we’ll have to patch some “too-fast neutrino in matter” phenomenon as well.

    I am not claiming this “patching” business is something intrinsically right and beautiful. Never. We’ll have to propose some new theories. But… before we have some better theory, patching General Relativity seems just the thing to do. Maybe the only thing to do, sorry.

    An average scientist (if there is such a thing) isn’t expected to propose something better than General Relativity. Not really. So, even as we teach scientists, most of ’em wouldn’t need to remember that “patching” old theories isn’t the right thing to do in the long run, as they may do nothing about it. Those with Nobel Prize ambitions would be wise to remember it, though.

    • tlhonmeyMay 13, 2022, 2:55 PM
      1 point

      Yeah, “dark matter” really bothers me. Which seems more likely?

      That there are massive quantities of invisible matter in the universe that only interacts via gravitation? And happens to be spread around in about the same density distribution as all the regular matter?

      Or that our estimate for the value of the universal gravitational constant is either off a little bit or not quite as constant as we think?

      The former sounds a little too much like an invisible dragon to me. Which doesn’t make it impossible, but exotic, nigh-undetectable forms of matter just doesn’t seem as plausible as observation error to me.

      • TAGMay 13, 2022, 5:03 PM
        2 points

        That there are massive quantities of invisible matter in the universe that only interacts via gravitation? And happens to be spread around in about the same density distribution as all the regular matter?

        Your second sentence is a pretty straightforward consequence of your first.

        • tlhonmeyMay 20, 2022, 9:27 PM
          1 point

          That is a reasonable possibility, although if it only interacts with normal matter via gravitation, which is relatively weak, then I’d expect to see its dispersal lag significantly behind, say, a supernova. And that lag would seem likely to result in such events skewing the distribution over time.

          Unless we’re also going to postulate that dark matter has its own energy, chemistry, and physics which resemble those of normal matter so closely that such things happen in both realms at the same time...

          Measurement error and/or gravity having some kind of propagation properties we haven’t worked out yet still seems like a contender for the explanation, unless they have indeed found pockets in the universe with differing amounts of excess gravitation that match what one would expect in the wake of fast-moving objects. I haven’t seen any reports about that myself, but I can’t say I’m an insider on the latest research or anything.

          • TAGMay 22, 2022, 4:41 PM
            1 point
            Parent

            The whole point of dark matter is to hold galaxies together through gravity. And it is posited as having exotic properties apart from gravity.
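            For context on that point: under Newtonian gravity, the circular speed implied by a galaxy’s visible mass alone falls off with radius, while measured rotation curves stay roughly flat — that gap is the classic motivation for dark matter. A minimal sketch of the prediction (the mass figure is an illustrative round number, not a measurement):

            ```python
            import math

            # Newtonian gravitational constant (SI units)
            G = 6.674e-11  # m^3 kg^-1 s^-2

            # Rough visible mass of a Milky-Way-like galaxy, treated as a
            # point mass for radii well outside the luminous disk
            # (illustrative figure, ~5e10 solar masses)
            M_VISIBLE = 1e41  # kg

            KPC = 3.086e19  # metres per kiloparsec


            def keplerian_speed(r_m: float) -> float:
                """Circular orbital speed (m/s) predicted by visible mass alone."""
                return math.sqrt(G * M_VISIBLE / r_m)


            for r_kpc in (10, 20, 40):
                v_kms = keplerian_speed(r_kpc * KPC) / 1000
                print(f"r = {r_kpc:2d} kpc: predicted v ~ {v_kms:.0f} km/s")

            # The prediction falls off as 1/sqrt(r), but observed rotation
            # curves stay roughly flat out to large radii -- the discrepancy
            # is what dark matter (or modified gravity) is invoked to explain.
            ```

            The 1/√r falloff is the key feature: doubling the radius should cut the speed by about 30%, which is not what observations show.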

  • CriticalSteel2Mar 1, 2012, 8:40 PM
    −10 points

    Sigh. He doesn’t know critical thinking...

  • LaochNov 14, 2013, 12:25 PM
    1 point

    Is there such a thing as negative-feeling death spirals, where, say, fear or mental illness keeps pushing you towards a terrible thought, concept, or idea?

    • wizzwizz4Apr 22, 2019, 9:10 PM
      1 point
      Parent

      I think so. It’s a positive feedback loop either way.

  • cosApr 21, 2021, 6:30 AM
    2 points

    Applying this to my own beliefs, I seem to be trapped in an affective death spiral around science and rationality. In fact, just as you described, this spiral has led me to seek out new opportunities to apply and engage with science and rationality, shaping not just my career but the entirety of my life in the process. I have a feeling most folks around here can relate to these statements.

    So I wonder, are affective death spirals always a bad thing? More specifically, should they always be avoided? Do seemingly positive affective death spirals carry risk of negative externalities?

    One place where my own obsession with science and rationality seems to get in the way of things is in highly emotional interactions with other people. Often, my attempts to apply science and rationality to statements made during a heated argument simply make matters worse. Same goes when consoling a friend or partner about something sad; few in such situations are actually interested in applying the scientific method.

    Then again, I also used science and rationality to get out of this pattern. I noticed my default approach wasn’t working, came up with new approaches, and tested them in different situations as they arose. After evaluating the results, admittedly with little in the way of statistical analysis, I landed on a robust system for dealing with highly emotional interpersonal encounters. (The biggest hurdle has been actually remembering to use it rather than defaulting to what feels right according to the affective death spiral around science and rationality which rules my life.)

    Edit: I continued onto the next article in this series. I now feel surprisingly prescient and a little silly.

    • papetoastJan 4, 2023, 1:08 PM
      2 points
      Parent

      I also found it a good practice to generate your own answers to how you would escape the happy death spiral, before reading the next article.

      My answer:

      Remember that powerful theories are the ones that eliminate many options, not ones that explain everything.

      I think it is a reasonably good answer, as it somewhat contains 3⁄5 of the points:

      • Thinking about the specifics of the causal chain instead of the good or bad feelings;

      • Not rehearsing evidence; and

      • Not adding happiness from claims that “you can’t prove are wrong”.

  • ProductimothyJul 15, 2024, 8:15 PM
    1 point

    This effect is really noticeable when you’re manic.
