Guardian column on ugh fields, mentions LW
http://www.guardian.co.uk/lifeandstyle/2011/jul/08/change-your-life-ugh-fields
In 1920, in a jaw-droppingly unethical experiment that’s mainly remembered today as an example of how not to conduct a psychological study, John B Watson set out to prove a point about fear – using, as his guinea pig, an eight-month-old boy in a Baltimore hospital. Little Albert, as he became known, was taught to associate a white rat with a terrifying sound – a steel bar was struck with a hammer behind his back whenever he reached towards the animal – until, the story goes, he was terrified of anything white and furry: dogs, a coat, Watson in a Santa Claus costume. (Watson, apparently, intended to reverse the effect, but Albert was removed from the hospital before he could do so.) It would be entertaining to propose something similar to a university ethics committee today: they’d spring from their seats in horror, like Little Albert seeing a sheepskin rug.
In fact, most of the details of Little Albert’s “conditioning” have since been thrown into doubt. But something not too dissimilar afflicts many of us. When an experience gets associated with acute bad feelings, especially in childhood – being around dogs, say, or swimming pools, or moving house or money troubles – that category of thing can become fearsome for ever. But there’s an additional twist I hadn’t considered until I encountered it recently on the rationality blog lesswrong.com, where it’s termed an “ugh field”: what if one effect of finding some area of life particularly stress-inducing is that we get conditioned into not even thinking about it at all?
“A problem with the human mind is it’s a horrific kludge that will fail when you most need it not to,” writes one Less Wrong blogger, who argues that ugh fields are a case in point: “If a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will develop a psychological flinch mechanism around the thought.”
Suppose, in early adulthood, you have a few bad experiences with missed credit card bills and penalty fees. A rational person might resolve to think more about bills in future, to avoid repeat problems. But a fear-conditioned mind, erecting an ugh field around the subject, might become more forgetful with money, to avoid experiencing the emotions associated with the thought, thus making matters worse. (Another example: many people fail to take medicines they’ve been prescribed for life-threatening conditions. Could it be because they’d rather avoid thinking about having a life-threatening condition – even if that puts their lives at risk?) Worse, if the ugh field hypothesis is correct, the “flinch” occurs, by definition, before the thought enters your conscious mind. So even someone sincerely dedicated to confronting (say) their issues with money won’t have the opportunity: the ugh field will have screened it out pre-emptively.
This is all highly dispiriting, except in so far as it highlights a broader truth about fear: we’re not really afraid of events, but of experiencing the emotions associated with them. (“I’m only ever afraid of a feeling, never a task,” is how the blogger David Cain applies this to procrastination at raptitude.com – see is.gd/t3cRPZ.) Which is actually liberating, since the prospect of experiencing an unpleasant emotion is almost always more palatable than the prospect of Something Really Bad happening. If you can tolerate the feeling of “ugh”, there’s not much you can’t tolerate in life.
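The mechanism the Less Wrong post describes is essentially operant conditioning applied to attention itself. As a purely illustrative toy model (nothing from the article or the post; the function name, the multiplicative down-weighting, and all parameter values are my own assumptions), here is a sketch of how repeated negative affect could drive the probability of even attending to a topic toward zero:

```python
import random

def simulate_ugh_field(trials=50, attend_prob=0.9, aversion=0.3, seed=0):
    """Toy model of an 'ugh field': each time the agent attends to a
    stressful topic, the resulting negative affect down-weights the
    chance of attending to it again."""
    rng = random.Random(seed)
    history = []
    for _ in range(trials):
        attended = rng.random() < attend_prob
        if attended:
            # Negative conditioning: thinking about the topic felt bad,
            # so future attention to it becomes less likely.
            attend_prob *= 1 - aversion
        history.append((attended, attend_prob))
    return history

for i, (attended, p) in enumerate(simulate_ugh_field()[:8]):
    print(f"trial {i}: attended={attended}, attend_prob -> {p:.3f}")
```

In this toy model the attention probability collapses after only a handful of bad experiences, so the topic is screened out before it can be deliberately examined — which is the pre-emptive "flinch" the article describes.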
Burkeman has referenced stuff Eliezer has written more than once
Nitpick: most of those aren’t actual examples, and in some cases they don’t have both names on the actual page. (Tangentially, this one makes my brain hurt; not completely sure why.)
Hi, I found this place because of that Guardian article. Do you know who authored http://lesswrong.com/lw/21b/ugh_fields? It only reads [deleted]; was the author’s account suspended for some reason? I might cite that article on a future occasion and want to give due credit. Thanks.
According to the wiki, it was Roko, who has since quit LW in order to eliminate a distraction from higher-order goals.
Please delete the parent. I would prefer that people be discouraged from stating my real-world name directly in the context of a post I had tried to remove. Accordingly, I will discourage others from doing the same and hope the norm sticks.
I’d consider it unnecessarily impolite to explicitly link somebody’s real name to an article when that person has decided to “unlink” those works from their identity. That the connection is discoverable by anybody does not imply that one ought to make discovering it easier for everybody.
Name reference removed. On rereading your post, I noticed you weren’t saying I should already have inferred from the context here that I was supposed to do that.
Edit: for the record, I probably wouldn’t have commented in the first place if the site didn’t effectively require me to comment as much as possible in order to keep the ability to downvote.
Hi,
The author wasn’t suspended; he deleted his account about a year ago, along with his other online presence. Some quick googling couldn’t turn up his email address; maybe someone else has it.
The author is user:Roko, and the “[deleted]” means that he deleted his post himself, so that only people who have the URL can view it. The reason for the deletion is an “ugh field” shared by many people here on Less Wrong. Better not to ask.
You’re using a Roko algorithm! Well, you might be, anyway. Specifically: trying to resolve troubling internal tension by drumming up social drama, in the hope that some decisive external event will knock you into stability. However, you don’t seem to be going out of your way to appear discreditable the way he did, perhaps because you don’t yet identify with the “x-rationalist” memeplex to the extent that Roko did.
Similarly, once it’s made explicit and reflected upon for a bit, the message you might be trying to send is something like the following:
Maybe I’m partially projecting. I’m pretty sure I’m ranting at least.
Edit: Here’s a simplified concrete example of this (insightfully reported by Yvain so you know you want to click the link, it’s a comment with 74 karma, for seriously), but it’s everywhere, implicitly, constantly, without any reflection or any sense that something is terrifyingly disgustingly insanely wrongly completely barking mad. Or a subtler example from Less Wrong.
I am really really impressed. That is basically exactly right.
Well, I managed to get out of Jehovah’s Witnesses on my own. People who care strongly about their reputation within a community often fail at that hurdle. Not that I want to draw any comparisons; I just want to highlight my personality. I have never cared much about my social reputation, except where caring about it is obviously instrumental.
I especially don’t identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested in learning.
I endorse everything you wrote there. I don’t know how to deal with a certain topic I can’t talk about. I can’t ask anybody outside of this community either; those I have asked just said it’s complete craziness.
On one side there is LW, and then there is everyone else; both sides call each other idiots. Those outside LW just don’t seem knowledgeable or smart enough to tell me what to do, while those inside LW seem too crazy and are held captive by a reputation system. I could try to figure it all out on my own, but the topic and the whole existential-risk business are too distracting to let me devote my time to educating myself sufficiently.
Sure, I could just trust Eliezer based on his reputation. Maybe a perfect Bayesian agent would do that; I have no idea. But I don’t have enough trust in, or knowledge of, the very methods that would allow me to conclude that assertions by Eliezer are very likely to be true. Should I really not read a book like ‘Good and Real’ because it talks about something I shouldn’t even think about? I can’t swallow that pill. Where do I draw the line? And how do I even avoid a topic that I am unable to pinpoint? I could “just” calculate the expected utility of thinking about the topic in and of itself, and the utility of the consequences according to Eliezer. But as I wrote, I don’t trust those methods. The utility of some logical implications of someone’s vague assertions seems far too thin a basis to take into account at all. Such thinking leads to Pascal’s Mugging scenarios, and I am not willing to take that route yet. But at the same time all this is sufficiently distracting and disturbing that I can’t just ignore it either.
You people drive me crazy. After a year of worries, do you think a few downvotes can make me shut up about it?
I don’t really think anyone here is insane, just overcredulous. The problem is that your memes are too damn effective at making one distrust one’s own intuitions.
See, back when I was a Jehovah’s Witness I was told that I had to do everything to make people aware of “the Truth”, to save as many people as possible and to join the paradise myself. I was told that the present time doesn’t count, that there would be infinitely more fun in the future. I was also told not to read or think about certain topics, because they would make me lose the paradise.
I thought I had left all that behind, only to learn that there are atheists who believe exactly the same things under different labels. Even the “you have to believe” part is back, in the form of “making decisions under uncertainty”: “uncertainty” so close to “belief” that it doesn’t make much of a difference...
No, I am generally impressed by the level of insight regarding my personal motives. For how long have you thought about this? Or is it that obvious?
Good rationalists shouldn’t read Good and Real? Why not? Where is this argued?
It is not argued anywhere. Good and Real is a good book.
I think more people should be real superintelligences. By that I mean: be perfect. I would say “try to be like a superintelligence”, but that’s just not right at all. Think instead about what perfection would look like, what wu wei would look like: moving elegantly, smiling peacefully, thinking clear flowing thoughts that cut away all delusions with their infinite sharpness, not chained by past selves, not pretending to be Atlas. Johan Liebert, except, ya know, not an insane serial killer with no seriously attainable goal. A Friendly Johan Liebert. Maybe that’s what I should aim for, seeing as Eliezer’s a wannabe Light Yagami, apparently. My surname was once Liebert.
They both get Bayes points!
This statement prompted me to finally non-jokingly admit to myself that I’m a theist. I still don’t know if God is a point, ring, cyclic, or chaotic attractor, though, even metaphorically speaking… improper uniformish priors over universal prior languages, the set theoretic multiverse, category theory, analogy and equivalence, bleh. I should go to a Less Wrong meetup some time, it’ll be effing hilarious. Bwa ha ha. I should write a book, called “Neomonadology”, coauthor it with Mitchell Porter, edited by Steve Rayhawk, have it further edited and commented on by my philosopher colleagues. He could talk about extreme low-level physics, I could talk about extreme high-level cosmology, trade off chapters, meet in the middle contentwise (and end pagewise) at decision theory, talk about ontology of agency, preferences as knowledge-processes embedded in time, reversible computation, some quantum thought problems for reflective decision theory, some acausal thought problems for reflective decision theory, go back in time and rewrite it using Hofstadter magicks, bam, published, most interesting book ever, acausal fame and recognition.
More unasked-for advice: Τώ ξιφεί τόν δεσμό λελύσθαι (“undo the knot with the sword”).
By that I mean: you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure. That is, study epistemic rationality, and ignore all that instrumental rationality bullshit. There are but six basic rules of instrumental rationality, and all require nigh-infinitely strong epistemic rationality: figure out who or what you are; figure out who or what you affect/effect; figure out who or what you and the things you affect value, or are affected by, or what they ‘should’ value or be affected by; meta-optimize; meta-optimize; meta-optimize. Those are all extremely hard, and all much more important than any object-level policy decision. You are in infinite contexts controlling infinite things; think big. Get closer to God. Optimize your strategy, never your choice. That insight coincidentally doubles, in a different context, as the heart of TDT.
Someone suggested a few weeks ago that you were exhibiting Roko-like tension-resolution behaviors. I didn’t think about it much at the time. But the context came up a few comments above, where you were talking about Roko; that primed me, and from there it was pretty easy to fill in a lot of details.
The longer version ends the same way but starts with: About a month ago there was a phase transition from a fluid jumble of ideas to a crystalline semi-coherent vocabulary for thinking and talking about social psychology, though of course the inchoate intuitions had been there for many years. Recently I’ve adopted Steve Rayhawk’s style of social analysis: making everything explicit, always going meta and going meta about going meta, distinguishing between wants/virtues and double-negative wants/virtues, emphasizing the importance of concessions and demands of concessions, et cetera. I think I focus on contempt qua contempt somewhat more than he does; he probably has much finer language for it than I do, since contempt is incredibly important to model correctly if one is reasoning about social epistemology, which is itself an incredibly important thing to reason about correctly. Anyway, I’ve learned a lot from Steve.
I remember being tempted to reply to your original comment re Roko with just ”/facepalm” and take the −4 karma hit for the lulz, but I figured it was a decent opportunity to, ya know, not troll for once. But there’s something twistedly satisfying about saying something you know will be dismissed for reasons that it would be easy for you to demonstrate are unvirtuous, unreflective, and unsophisticated. Steven Kaas (User:steven0461, Black Belt Bayesian) IMed me a few days ago:
I made a decision. I am going to log out and come back in 5 years. Until then I am going to devote all my time to my personal education.
If you think that any of my submissions might have strong negative effects you can edit or delete them. I will not react to any editing or deletion.
Prediction registered: http://predictionbook.com/predictions/2909
Prediction over...
60%?! That a regular user will abstain from an addictive site for about twice its current age? A site about a topic he’s obsessed with? I’ll take that bet.
(Made my own 5% prediction.)
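As an aside, the gap between a 60% and a 5% forecast is easy to make concrete with a proper scoring rule. A minimal sketch, assuming both numbers refer to the same event (XiXiDu abstaining from posts and comments for 5 years) and using the Brier score, which is my choice for illustration rather than anything PredictionBook itself is known to report:

```python
def brier(p, outcome):
    """Brier score for a binary forecast: (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

forecasts = {"the 60% forecast": 0.60, "the 5% forecast": 0.05}
for outcome, label in [(1, "he stays away for 5 years"), (0, "he comes back")]:
    print(f"If {label}:")
    for name, p in forecasts.items():
        print(f"  {name}: Brier = {brier(p, outcome):.4f}")
```

Whichever way it resolves, the 5% forecast is the riskier claim: it scores much better if he comes back (0.0025 vs 0.36) and much worse if he stays away (0.9025 vs 0.16).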
My reasoning was along the lines of ‘well, now he’s publicly committed to it and would be ashamed to make a comment or post’, plus the fact that LW can be something of a habit—and once a habit is broken, it’s easy to keep not engaging in it. (For example, I do not have the habit of smoking, and I suspect I will have ~100% success in continuing not to smoke over the next 5 years.)
Although note that I cheat slightly by specifying posts and comments—so he could still engage in private messages or in voting on comments & posts, and I would not count that as a falsification of the prediction.
My impression is that XiXiDu has been talking about needing to study more and about leaving LW / utility considerations for quite some time now. I don’t think he can even make serious commitments right now. He hasn’t even deleted his LiveJournal yet.
Neither would I. Coming back under a new name would count, though.
Mm. Well, we shall see. Not deleting LJ isn’t a warning signal for me—having LJ can encourage your studying (‘what do I write up today?’) which LW doesn’t necessarily (‘what do I read on LW today?’).
Good point; I’ll clarify that when I say ‘XiXiDu’ in the prediction, I mean the underlying person and not the specific LW account.
Why did you change your mind?
If you actually read everything you post to twitter, you’re among the fastest self-educators I know of. Doing 5 years of learning at that rate, without feedback on your learning, could include a lot of sub-optimal paths. Of course, the tradeoff is that the feedback you get may or may not help you optimize your learning for your actual goals.
I’m not sure how to interpret that quote by Steven Kaas, given that he is downvoted extremely rarely. I count 3 LW comments with negative points (-1, −1, −2) from User:steven0461 out of more than 700. (I also wanted to comment because people reading your quote might form the impression that Steven is someone who is often downvoted and usually interprets those downvotes as evidence of other people being wrong.)
It’s a joke. (“Them” turns out not to have the expected antecedent.)
Or perhaps one should stop distracting oneself with stupid abstract knots altogether and instead revolt against the prefrontal cortical overmind, as I have previously accidentally argued while on the boundary between dreams and wakefulness:
If this makes no sense to you that’s probably a good thing.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes one to understand that paragraph?
Also, is there a collection of all Kaasisms somewhere? He’s pretty much my favorite humorist these days, and the suspicion that there are far more of those incisive aphorisms than he publishes to twitter is going to haunt me with visions of unrealized enjoyment.
I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of “understanding” and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they’d know that my disclaimer didn’t apply to them.) Edit: When I say “I recommend against it for that secondarily”, what I mean is, “sure, that sounds like a decent reason, and I guess it’s sort of possible that I implicitly thought of it at the time of writing”. Another equally plausible secondary reason would be that I was signalling that I wasn’t falling for the potential errors that primarily caused me to write the disclaimer in the first place.
I don’t think so, but you could read the entirety of his blog Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I were assembling a team of 9 FAI programmers, I’d probably hire Steven Kaas on the grounds that he is obviously somehow necessary.
I happened to see an image macro that’s a partial tl;dr of this: http://knowyourmeme.com/photos/211139-scumbag-brain
Yay scumbag brain. To be fair, though, I should admit I’m not exactly the least biased assessor of the prefrontal cortex. http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jht
Agreed, I hate that too. When that happens to me, I just repeat the point in different places until someone finally explains how I am wrong, and I accept the karma hit. I have no idea what people who just downvote are thinking. If I had known that I was wrong, or how I was wrong, I wouldn’t have written the comment/post in the first place.
I think “ugh field” is the wrong term. A better description would be that he separately brought up a topic that we know from experience ends up being extremely contentious and unproductive, so we try to avoid discussing it. He then regretted doing so, and as a result deleted a large chunk of his own posts, including several, like this one, that were quite insightful. Roko deleting the posts was probably overkill, but there you have it.
Wow, that really gives a distorted picture of what happened.
A better description would be to say that he brought up a topic that some people, including Eliezer Yudkowsky, believe can cause negative effects by virtue of people merely thinking about it.
And Roko himself now. (source: 1 2)
I am pretty sure that, though Roko wrote up the post, the naming and specific conceptualization of “ugh fields” was originally a product of the thinking of JenniferRM, AnnaSalamon, and probably others—though my memory is rather vague at this point. Just to give some probabilistic credit where it’s due.
Footnote at the bottom.
+1 to my memory, −2 to my scanning abilities. Thanks.
I am noticing that I am very, very confused. What is so controversial about ugh fields? Why is this a Banned Idea? I was somehow able to read the original article (I didn’t even notice it was deleted; I must have followed a link to the original URL), and it seemed uncontroversial to me. Or is there a different ‘Banned Idea’ that I’m completely missing?
This, I think. I don’t think there was anything controversial about the ugh fields post; it’s gone because Roko wrote it and he deleted a bunch of his posts in the wake of an argument about that different Banned Idea.
Yeah, I was talking about that Banned Idea, which is totally unrelated to ugh fields and has to do with the perils of AI.
I don’t understand. What the hell prompted everyone to suddenly discuss the Banned Idea all at once?
I’m going to work up a theory about sticky associations. The short version is that, in addition to bannedness making things fascinating to many people (and the sort of people who like LW are probably less compliant about such things than the general population), the mere mention of anything associated with the idea (like Roko’s name) is going to bring it back.
Now the Guardian is mentioned on LW too; that could really start an infinite loop that takes over some of cyberspace’s space.
ok, gone