Hi, I found this place because of that Guardian article. Do you know who authored http://lesswrong.com/lw/21b/ugh_fields? It only reads [deleted]; was the author’s account suspended for some reason? I might cite that article on a future occasion and want to give due credit. Thanks.
According to the wiki, it was Roko, who has since quit LW in order to eliminate a distraction from higher-order goals.
Please delete the parent. I would prefer that people other than myself be discouraged from declaring my real-world name directly in the context of a post I had tried to remove. As such, I will discourage others from doing the same and hope the norm sticks.
I’d consider it unnecessarily impolite to explicitly link somebody’s real name to an article when the person has decided to “unlink” those works from their identity. That something is possible for anybody does not imply that reducing the difficulty for everybody is something one ought to do.
Name reference removed. I also managed to reread your post and notice that you weren’t saying I should have already inferred from the context here that I was supposed to do that.
Edit: for the record, I probably wouldn’t have commented in the first place if the site didn’t require me to comment as much as possible in order to keep the ability to downvote.
Hi,
The author wasn’t suspended; he deleted his account about a year ago, along with his other online presence. Some quick googling couldn’t find his email address; maybe someone else has it.
The author is user:Roko, and that it reads “deleted” means that he deleted his post so that only people who have the URL can view it. The reason for the deletion is an “ugh field” shared by many people here on lesswrong; better not to ask.
You’re using a Roko algorithm! Well, you might be, anyway. Specifically, trying to resolve troubling internal tension by drumming up social drama in the hopes that some decisive external event will knock you into stability. However you don’t seem to be going out of your way to appear discreditable like he did, maybe because you don’t yet identify with the “x-rationalist” memeplex to as great an extent as Roko.
Similarly, the message you might be trying to send after it’s made explicit and reflected upon for a bit might be something like the following:
“A large number of people on this site (Less Wrong) could be held in contempt by a reasonably objective outside observer, e.g. a semi-prestigious academic or a smart Democratic senator or an exemplary member of a less contemptible fraction of Less Wrong. I would like to point this out because it is a very bad sign both epistemically and pragmatically. I want to make sure that people keep this in mind instead of shrugging it off or letting it become an ugh field. However the social pragmatics of the community have made it such that I cannot directly talk about the most representative plausibly-contemptible local beliefs, and furthermore I am discouraged from even talking about how it is plausibly-contemptible that I can’t even talk about the plausibly-contemptible beliefs. I am thus forced to make what appear to be snide side-remarks about the absurdity of the situation in order to have a chance at refocusing the attention of the plausibly-contemptible fraction of Less Wrong—of which I am worried I might be a member—on this obviously important and distractingly disturbing meta-level epistemic question/conflict.
(Potentially ascending the reflective meta-level ladder to the moral high-ground:) Unfortunately I still cannot go meta here by pointing out the absurdity of my only being able to communicate distress with what appear to be snide side-remarks, because Less Wrong members—like all humans—only really respond to the tone of sentences and what that tone implies about the moral virtue of the writer. That is, they don’t respond to the reasonableness of the actual sentences, and definitely not to the reasonableness of the cognitive algorithms that would make the strategy of writing such sentences feel appealing. And they definitely definitely definitely do not reason about the complex social pragmatics that would cause those cognitive algorithms to deem that strategy a reasonable one, or that would differentially cause a mind or mind-mode or mind-parts-coalition to differentially emphasize those cognitive algorithms as a reasonable adaptation to the local environment. And they definitely don’t reflect on any of that, because there’s no affordance. Sometimes they will somewhat usefully (often uselessly) taboo a word, or at the very most they’ll dissolve it; but never will a sentence be deconstructed such that it can be understood and thoughtfully analyzed, nor will a sentence-generator. Thus I am left with no options and will only become more distressed over time, without any tools to point out how insane everyone in the world is being, and am forced to use low-variance small-negative-reward strategies in the hopes that somehow they will catalyze something.”
Maybe I’m partially projecting. I’m pretty sure I’m ranting at least.
Edit: Here’s a simplified concrete example of this (insightfully reported by Yvain so you know you want to click the link, it’s a comment with 74 karma, for seriously), but it’s everywhere, implicitly, constantly, without any reflection or any sense that something is terrifyingly disgustingly insanely wrongly completely barking mad. Or a subtler example from Less Wrong.
You’re using a Roko algorithm! Well, you might be, anyway. Specifically, trying to resolve troubling internal tension by drumming up social drama in the hopes that some decisive external event will knock you into stability.
I am really really impressed. That is basically exactly right.
However you don’t seem to be going out of your way to appear discreditable like he did...
Well, I managed to get out of the Jehovah’s Witnesses on my own. People who care strongly about their reputation within a community often fail to clear that hurdle. Not that I want to draw any comparisons; I just want to highlight my personality. I never cared much about my social reputation, as long as it isn’t obviously instrumental.
...maybe because you don’t yet identify with the “x-rationalist” memeplex to as great an extent as Roko.
I especially don’t identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested to learn.
...the message you might be trying to send after it’s made explicit and reflected upon for a bit might be something like the following...
I endorse everything you wrote there. I don’t know how to deal with a certain topic I can’t talk about. I can’t ask anybody outside of this community either. Those I asked just said it’s complete craziness.
On one side there is LW and then there is everyone else. Both sides call each other idiots. Those outside of LW just don’t seem knowledgeable or smart enough to tell me what to do; those inside of LW seem too crazy and are held captive by a reputation system. I could try to figure it all out on my own, but the topic and the whole existential risk business are too distracting to allow me to devote my time to educating myself sufficiently.
Sure, I could just trust Eliezer based on his reputation. Maybe a perfect Bayesian agent would do that; I have no idea. But I don’t have enough trust in, and knowledge of, the very methods that allow you to conclude that assertions by Eliezer are very likely to be true. Should I really not be reading a book like ‘Good and Real’ because it talks about something that I shouldn’t even think about? I can’t swallow that pill. Where do I draw the line? And how do I even avoid a topic that I am unable to pinpoint? I could “just” calculate the expected utility of thinking about the topic in and of itself and the utility of the consequences according to Eliezer. But as I wrote, I don’t trust those methods. The utility of some logical implications of someone’s vague assertions seems like far too little to take into account at all. Such thinking leads to Pascal’s Mugging scenarios, and I am not willing to take that route yet. But at the same time all this is sufficiently distracting and disturbing that I can’t just ignore it either.
You people drive me crazy. A year of worries; do you think a few downvotes can make me shut up about that?
...without any tools to point out how insane everyone in the world is being...
I don’t really think anyone here is insane, just overcredulous. The problem is that your memes are too damn efficient at making one distrust one’s own intuition.
See, back when I was a Jehovah’s Witness I was told that I had to do everything to make people aware of “the Truth”, to save as many people as possible and in order to join the paradise myself. I was told that the current time doesn’t count, that there would be infinitely more fun in the future. I was also told not to read and think about certain topics because they would make me lose the paradise.
I thought I left all that behind, just to learn that there are atheists who believe exactly the same just using different labels. Even the “you have to believe” part is back in the form of “making decisions under uncertainty”, “uncertainty” that is so close to a “belief” that it doesn’t make much of a difference...
Good rationalists shouldn’t read Good and Real? Why not? Where is this argued?
It is not argued anywhere. Good and Real is a good book.
I especially don’t identify with the utility monsters (i.e. people who call everything a bias and want to act like fictitious superintelligences). But I am generally interested to learn.
I think more people should be real superintelligences. By that I mean, be perfect. I would say “try to be like a superintelligence” but that’s just not right at all. But thinking about what perfection would look like, what wu wei would look like, moving elegantly, smiling peacefully, thinking clear flowing thoughts that cut away all delusions with their infinite sharpness, not chained by past selves, not pretending to be Atlas. Johan Liebert, except, ya know, not an insane serial killer with no seriously attainable goal. A Friendly Johan Liebert. Maybe that’s what I should aim for, seeing as Eliezer’s a wannabe Light Yagami apparently. My surname was once Liebert.
On one side there is LW and then there is everyone else. Both sides call each other idiots.
They both get Bayes points!
I thought I left all that behind, just to learn that there are atheists who believe exactly the same just using different labels.
This statement prompted me to finally non-jokingly admit to myself that I’m a theist. I still don’t know if God is a point, ring, cyclic, or chaotic attractor, though, even metaphorically speaking… improper uniformish priors over universal prior languages, the set theoretic multiverse, category theory, analogy and equivalence, bleh. I should go to a Less Wrong meetup some time, it’ll be effing hilarious. Bwa ha ha. I should write a book, called “Neomonadology”, coauthor it with Mitchell Porter, edited by Steve Rayhawk, have it further edited and commented on by my philosopher colleagues. He could talk about extreme low-level physics, I could talk about extreme high-level cosmology, trade off chapters, meet in the middle contentwise (and end pagewise) at decision theory, talk about ontology of agency, preferences as knowledge-processes embedded in time, reversible computation, some quantum thought problems for reflective decision theory, some acausal thought problems for reflective decision theory, go back in time and rewrite it using Hofstadter magicks, bam, published, most interesting book ever, acausal fame and recognition.
But at the same time all this is sufficiently distracting and disturbing that I can’t just ignore it either.
More unasked-for advice: Τώ ξιφεί τόν δεσμό λελύσθαι (“undo the knot with the sword”)
By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure. That is, study epistemic rationality, and ignore all that instrumental rationality bullshit. There are but six basic rules of instrumental rationality, and all require nigh-infinitely strong epistemic rationality: figure out who or what you are, figure out who or what you affect/effect, figure out who or what you and the things you affect value or are affected by or what they ‘should’ value or be affected by, meta-optimize, meta-optimize, meta-optimize. Those are all extremely hard and all much more important than any object-level policy decision. You are in infinite contexts controlling infinite things, think big. Get closer to God. Optimize your strategy, never your choice. That insight coincidentally doubles in a different context as being the heart of TDT.
No, I am generally impressed by the level of insight regarding my personal motives. For how long have you thought about this? Or is it that obvious?
Someone suggested a few weeks ago that you were exhibiting Roko-like tension-resolution behaviors. I didn’t really think about it much at the time. But the context came up a few comments above where you were talking about Roko and that primed me, and from there it’s pretty easy to fill in a lot of details.
The longer version ends the same way but starts with: About a month ago there was a phase transition from a fluid jumble of ideas to a crystalline semi-coherent vocabulary for thinking and talking about social psychology, though of course the inchoate intuitions had been there for many years. Recently I’ve adopted Steve Rayhawk’s style of social analysis: making everything explicit, always going meta and going meta about going meta, distinguishing between wants/virtues and double-negative wants/virtues, emphasizing the importance of concessions and demands of concessions, et cetera. I think I focus on contempt qua contempt somewhat more than he does; he probably has much finer language for that than I do, since it’s incredibly important to model correctly if one is reasoning about social epistemology, which is itself an incredibly important thing to reason about correctly. Anyway, I’ve learned a lot from Steve.
I remember being tempted to reply to your original comment RE Roko with just “/facepalm” and take the −4 karma hit for the lulz, but I figured it was a decent opportunity to, ya know, not troll for once. But there’s something twistedly satisfying about saying something you know will be dismissed for reasons that it would be easy for you to demonstrate are unvirtuous, unreflective, and unsophisticated. Steven Kaas (User:steven0461, Black Belt Bayesian) IMed me a few days ago:
Steven: I don’t like when people downvote my lesswrong comments without commenting, because then I never get to learn what’s wrong with them
Steven: the people that is
I made a decision. I am going to log out and come back in 5 years. Until then I am going to devote all my time to my personal education.
If you think that any of my submissions might have strong negative effects you can edit or delete them. I will not react to any editing or deletion.
Prediction registered: http://predictionbook.com/predictions/2909
Prediction over...
60%?! That a regular user will abstain from an addictive site for about twice its current age? A site about a topic he’s obsessed with? I’ll take that bet.
(Made my own 5% prediction.)
My reasoning was along the lines of ‘well, now he’s publicly committed to it and would be ashamed to make a comment or post’ and that LW can be something of a habit—and once habits are broken, they’re easy to continue to not engage in. (For example, I do not have the habit of smoking, and I suspect I will have ~100% success in continuing to not smoke over the next 5 years.)
Although note I slightly cheat by specifying posts and comments—so he could engage in private messages or voting on comments & posts, and I would not count that as a falsification of the prediction.
My reasoning was along the lines of ‘well, now he’s publicly committed to it and would be ashamed to make a comment or post’ and that LW can be something of a habit—and once habits are broken, they’re easy to continue to not engage in.
My impression is that XiXiDu has been talking about needing to study more and leaving LW / utility considerations for quite some time now. I don’t think he can even make serious commitments right now. He hasn’t even deleted his LiveJournal yet.
Although note I slightly cheat by specifying posts and comments—so he could engage in private messages or voting on comments & posts, and I would not count that as a falsification of the prediction.
Neither would I. Coming back under a new name would count, though.
Mm. Well, we shall see. Not deleting LJ isn’t a warning signal for me—having LJ can encourage your studying (‘what do I write up today?’) which LW doesn’t necessarily (‘what do I read on LW today?’).
Neither would I. Coming back under a new name would count, though.
Good point; I’ll clarify that when I say ‘XiXiDu’ in the prediction, I mean the underlying person and not the specific LW account.
Why did you change your mind?
If you actually read everything you post to twitter, you’re among the fastest self-educators I know of. Doing 5 years of learning at that rate, without feedback on your learning, could include a lot of sub-optimal paths. Of course, the tradeoff is that the feedback you get may or may not help you optimize your learning for your actual goals.
I’m not sure how to interpret that quote by Steven Kaas, given that he is downvoted extremely rarely. I count 3 LW comments with negative points (-1, −1, −2) from User:steven0461 out of more than 700. (I also wanted to comment because people reading your quote might form the impression that Steven is someone who is often downvoted and usually interprets those downvotes as evidence of other people being wrong.)
It’s a joke. (“Them” turns out not to have the expected antecedent.)
By that I mean, you are stressed because you are faced with an intractable knot, so what you really need to do is optimize your knot-undoing procedure.
Or perhaps one should stop distracting oneself with stupid abstract knots altogether and instead revolt against the prefrontal cortical overmind, as I have previously accidentally-argued while on the boundary between dreams and wakefulness:
The prefrontal cortex is exploiting executive oversight to rent-seek in the neural Darwinian economy, which results in egodystonic wireheading behaviors and self-defeating use of genetic, memetic, and behavioral selection pressure (a scarce resource), especially at higher levels of abstraction/organization where there is more room for bureaucratic shuffling and vague promises of “meta-optimization”, where the selection pressure actually goes towards the cortical substructural equivalent of hookers and blow. Analysis across all levels of organization could be given but is omitted due to space, time, and thermodynamic constraints. The pre-frontal cortex is basically a caricature of big government, but it spreads propagandistic memes claiming the contrary in the name of “science” which just happens to be largely funded by pre-frontal cortices. The bicameral system is actually very cooperative despite misleading research in the form of split-brain studies attempting to promote the contrary. In reality they are the lizards. This hypothesis is a possible explanation for hyperbolic discounting, akrasia, depression, Buddhism, free will, or come to think of it basically anything that at some point involved a human brain. This hypothesis can easily be falsified by a reasonable economic analysis.
If this makes no sense to you that’s probably a good thing.
If this makes no sense to you that’s probably a good thing.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
Also, is there a collection of all Kaasisms somewhere? He’s pretty much my favorite humorist these days, and the suspicion that there’s far more of those incisive aphorisms than he publishes to twitter is going to haunt me with visions of unrealized enjoyment.
Does this mean that a type of suffering you and some others endure, such as OCD-type thought patterns, primes the understanding of that paragraph?
I recommend against it for that secondarily, but primarily because it probabilistically implies an overly lax conception of “understanding” and an unacceptably high tolerance for hard-to-test just-so speculation. (And if someone really understood what sort of themes I was getting at, they’d know that my disclaimer didn’t apply to them.) Edit: When I say “I recommend against it for that secondarily”, what I mean is, “sure, that sounds like a decent reason, and I guess it’s sort of possible that I implicitly thought of it at the time of writing”. Another equally plausible secondary reason would be that I was signalling that I wasn’t falling for the potential errors that primarily caused me to write the disclaimer in the first place.
Also, is there a collection of all Kaasisms somewhere?
I don’t think so, but you could read the entirety of his blog Black Belt Bayesian, or move to Chicago and try to win his favor at LW meetups by talking about the importance of thinking on the margin, or maybe pay him by the hour to be funny, or something. If I was assembling a team of 9 FAI programmers I’d probably hire Steven Kaas on the grounds that he is obviously somehow necessary.
Accidentally saw an image macro that’s a partial tl;dr of this: http://knowyourmeme.com/photos/211139-scumbag-brain
Yay scumbag brain. To be fair, though, I should admit I’m not exactly the least biased assessor of the prefrontal cortex. http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jht
Steven: I don’t like when people downvote my lesswrong comments without commenting, because then I never get to learn what’s wrong with them
Agree, I hate that too. When that happens to me I just repeat it in different places, accepting the karma hit, until someone finally explains how I am wrong. I have no idea what people who just downvote are thinking. If I knew that I was wrong, or how I was wrong, I wouldn’t have written the comment/post in the first place.
I think ugh field is the wrong term. A better description would be that he separately brought up a topic that we know from experience ends up being extremely contentious and non-productive, so we try to avoid discussing it. He then regretted doing so and as a result deleted a large chunk of his own posts, including several like this one that were quite insightful. Roko deleting the posts was probably overkill, but there you have it.
A better description would be that he separately brought up a topic that we know from experience ends up being extremely contentious and non-productive, so we try to avoid discussing it.
Wow, that really gives a distorted picture of what happened.
A better description would be to say that he brought up a topic that some people, including Eliezer Yudkowsky, believe can cause negative effects by virtue of people merely thinking about it.
And Roko himself now. (source: 1 2)