Do you have any actual reason (introspection doesn’t count) to “expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding”? I’m not asking if you can fathom or rationalize up a reason, I’m requesting the raw original basis for the assumption.
Your reduced optimism is a recognition within my assessment rather than without it; you agree, but you see deeper properties. Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we’d be neglecting our duty as rationalists and Bayesians alike if we didn’t properly break down each hypothesis in turn to determine its proper weight and quality over the space of measured data. Solomonoff induction is what it is because it takes every possibility into account. Of course if I start off a discussion saying nonsense is useful, you can well predict what the reaction to that will be. It’s useful, to start off, from a state of ignorance. (The default state of all people, LessWrongers included.)
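(A minimal illustrative sketch, not part of the original exchange: the Bayesian weighting of hypotheses and the Solomonoff-style simplicity prior being gestured at above. The coin-bias hypotheses, observed flips, and description lengths below are invented for illustration; the point is only that every hypothesis keeps a nonzero weight, while the data quickly drive the implausible ones toward negligible weight.)

```python
# Toy sketch: weight competing hypotheses by prior * likelihood, with a
# Solomonoff-flavoured simplicity prior (shorter description => larger prior).
# Hypotheses are coin-bias models; description lengths are made-up placeholders.

hypotheses = {
    "fair coin (p=0.5)":    (0.5, 2),    # (P(heads), description length in bits)
    "biased coin (p=0.9)":  (0.9, 5),
    "always heads (p~1.0)": (0.999, 8),
}

data = "HHTHHTHH"  # observed flips

def posterior_weights(hypotheses, data):
    weights = {}
    for name, (p_heads, length_bits) in hypotheses.items():
        prior = 2.0 ** (-length_bits)        # simplicity prior, roughly 2^-K
        likelihood = 1.0
        for flip in data:
            likelihood *= p_heads if flip == "H" else 1.0 - p_heads
        weights[name] = prior * likelihood   # unnormalised posterior weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

for name, w in posterior_weights(hypotheses, data).items():
    print(f"{name}: {w:.6f}")
```

Running this, the "always heads" hypothesis is never assigned exactly zero weight, but after eight flips its share of the posterior is already down to a few parts in a million.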
Macroeconomics: Semi-legitimate topic. There is room for severe rational disagreement. Implications for most participants in such discussions are very low, which classifies the topic as irrelevant despite the room for opinion variance.
Feminism: Arguably a legitimate point to contend over. I’ll allow this as evidence counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, “Well, I take feminism to mean...” Basically, I don’t really believe this is a point of contention rather than discussion for the generalized LessWrong collective.
HIV & AIDS: Can’t perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
DDT: What’s to discuss? “Should it have been done?” From my understanding this is an issue of the past and thus qualifies as trivial by virtue of being causally disconnected from future actions. Not saying discussing the past isn’t useful, but it’s not exactly boldly adventurous thinking on anyone’s part.
Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side? If you wish to argue that wrongness ought to be downvoted, I can address that.
Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It’s never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion. It is always, every time, about whether or not that is the optimal route. Since nobody is actually going to do anything useful as a result of such discussions, yes, literally, intellectual masturbation.
How to assess the importance of technologies: Semi-legitimate topic. What we need here are theories, new ideas, hypotheses; in spades. LessWrong hates that. New ideas, ideas that stand out, heck, anything less than previously established LessWrong general consensus is downvoted. You could say LessWrong argues how to assess the importance, but never actually does assess the importance.
Global warming: Fully legitimate topic.
“How much lead exposure harms children’s development:” It’s a bad thing. What’s to argue or discuss? (Not requesting this as a topic, but demonstrating why I don’t think LessWrong’s discussing it is useful in any way.)
Astronomical waste: Same as above.
Global demographic transition: Legitimate, useful even, but trivial in the sense that most of what you’re doing is just looking at the data coming out; I don’t see any real immediate epistemic growth coming out of this.
And I’ve seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level.
Yes, that is the thing which I do credit LessWrong on. The problem is in the rate of advancement; nobody is really getting solid returns on this investment. It’s useful, but not in excess of the average usefulness coming from any other field of study or social process.
People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.
That is intensely interesting and the kind of thing I’d yell at you for not looking more into, let alone remembering only dimly. Events like these are where we’re beginning to detect returns on all this investment. I would immediately hold an interview in response to such a stimulus.
For another thing, there’s a more parsimonious explanation for why some topics are taboo here: because they lead to disproportionately unpleasant & unproductive arguments.
That is, word for word, thought for thought that wrote it, perception for perception that generated the thoughts, the exact basis of the understanding that leads me to make the arguments I am making now.
I think it’s legitimate to [cultivate] LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion).
This is, primarily, why I do things other than oppose the subject bans. Leaving it banned, leaving it taboo, dampens the powder considerably. This is where I can help, if LessWrong could put up with the fact that I know how to navigate the transition. But of course that’s an extraordinary claim; I’m not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. Evidence of that? In what form? Should I bring about the Singularity? Should I improve some other (equally resistant) rationalist community? What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)
I’m left with having to argue that I’m on a level where I can manage a community of rationalists. It’s not an argument any LessWronger is going to like very much at all. You’re able to listen to it now because you’re not the average LessWronger. You’re different, and if you’ve properly taken the time to reflect on the opening question of this comment, you’ll know exactly why that is. I’m not telling you this to flatter you (though it is reason to be flattered), but rather because I need you to be slightly more self-aware in order for you to see the true face of LessWrong that’s hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level. How exactly to utilize that is something I’ve yet to fully ascertain, but it is advanced by this conversation.
LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.
Interesting article, and I’m surprised/relieved/excited to see just how upvoted it’s been. I can say this much: Wanting the last word, wanting to Correct the Internet… These are useful things that advance rationality. Apathy is an even more powerful force than either of those. I know a few ways to use it usefully. You’re part of the solution, but you’re not seeing it yet, because you’re not seeing how far behind the mass really is.
I’d be worried if I thought LWers wanted to “restrain the world”, as you grandiosely put it.
LessWrong is a single point within a growing Singularity. I speak in grandiose terms because the implications of LessWrong’s existence, growth, and path are themselves grand. Politics is one of three memetically spread conversational taboos, outside of LessWrong. LessWrong merely formalized this generational wisdom. As Facebook usage picks up, and the art of internet argument is brought to the masses, we’re seeing an increase in socioeconomic and sociopolitical debate. This is correct, and useful. However, nobody aside from myself and a few others that I’ve met seems to be noticing this. LessWrong itself is going to become generationally memetic. This is correct, and useful. When, exactly, this will happen, is a function primarily of society. What, exactly, LessWrong looks like at that moment in history will offset billions of fates. Little cracks and biases will form cavernous gaps in a civilization’s mindset. This moment in history is far off, so we’re safe for the time being. (If that moment were right now, I would be spending as much of my time as possible working on AGI to crush the resulting leviathan.)
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW’s memetic moment were right now? Would LessWrong then be merely restraining its own members?
Do you have any actual reason (introspection doesn’t count) to “expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding”? [...] I’m requesting the raw original basis for the assumption.
LWers self-report having above-average IQs. (One can argue that those numbers are too high, as I’ve done, but those are just arguments about degree.) People with more cognitive firepower to direct at problems are presumably going to do so more often.
LWers self-report above-average AQs. (Again, one might argue those AQs are exaggerated, but the sign of the effect is surely right given LW’s nerdy bent.) This is evidence in favour of LWers being people who tend to automatically apply a fine-grained (if not outright pedantic) and systematic thinking style when confronted with a new person or organization to think about.
Two linked observations. One: a fallacy/heuristic that analytical people often lean on is treating reversed stupidity as intelligence. Two: the political stupidity that an analytical person is likely to find most salient is the stupidity coming from people with firmly held, off-centre political views. Bringing the two together: even before discovering LW, LWers are the kind of analytical types who’d apply the reversed stupidity heuristic to politics, and infer from it that the way to avoid political stupidity is to postpone judgement by trying to look at Both Sides before committing to a political position.
Every time Eliezer writes a new chapter of his HPMoR fanfic, LW’s Discussion section explodes in a frenzy of speculation and attempts to integrate disparate blobs of evidence into predictions about what’s going to happen next, with a zeal most uninterested outside observers might find hard to understand. In line with nerd stereotype, LWers can’t even read a Harry Potter story without itching to poke holes in it.
(Have to dash out of the house now but I’ll comment on the rest soon.)
Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we’d be neglecting our duty as rationalists and Bayesians alike if [...]
I agree with that, read literally, but I disagree with the implied conclusion. Nonsensical arguments hit diminishing (and indeed negative) returns so quickly that in practice they’re nearly useless. (There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.)
Feminism: Arguably a legitimate point to contend over. I’ll allow this as evidence counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, “Well, I take feminism to mean...” Basically, I don’t really believe this is a point of contention rather than discussion for the generalized LessWrong collective.
Hmm. I tend not to wade into the arguments about feminism so I don’t remember any examples that unambiguously meet your criteria, and some quick Google searches don’t give me any either, although you might have more luck. Still, even without evidence on hand sufficient to convince a sceptic, I’m fairly sure feminism, and related issues like pick-up artistry and optimal ways to start romantic relationships, are contentious topics on LW. (In fact I think there’s something approaching a mild norm against gratuitously bringing up those topics because Less Wrong Doesn’t Do Them Well.)
HIV & AIDS: Can’t perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side?
Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It’s never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion.
I had hoped that your going through my list of examples point by point would clarify how you were judging which topics were “legitimate” & nontrivial, but I’m still unsure. In some ways it seems like you’re judging topics based on whether they’re things LWers are actually doing something about, but LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?
People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
The point I meant to make in bringing that up was not that you should cheer people on for dedicating time & money to FAI; it was that people doing so is an existence proof that some LWers are “changing their deep beliefs after ‘seeing the light’”. If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now that I read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
That is intensely interesting and the kind of thing I’d yell at you for not looking more into, let alone remembering only dimly. [...] I would immediately hold an interview in response to such a stimulus.
Unless my memory’s playing tricks on me, Eliezer did ask that person to elaborate, but got no response.
This is where I can help, [...] I know how to navigate the transition. But of course that’s an extraordinary claim; I’m not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. [...] What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)
It seems pretty sensible to me to demand evidence when someone on the fringes of an established community says they’re convinced they know exactly (1) how to singlehandedly overhaul that community, and (2) what to aim for in overhauling it.
I can’t divine the answer you have in mind either.
I’m left with having to argue that I’m on a level where I can manage a community of rationalists. It’s not an argument any LessWronger is going to like very much at all. You’re able to listen to it now because you’re not the average LessWronger.
I don’t think you’re making the argument you think you are. The argument I’m hearing is that LW isn’t reaching its full potential because LWers sit around jacking each other off rather than getting shit done. You haven’t actually mounted an argument for your own managerial superiority yet.
You’re different, and if you’ve properly taken the time to reflect on the opening question of this comment, you’ll know exactly why that is. [...] I need you to be slightly more self-aware in order for you to see the true face of LessWrong that’s hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level.
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special. The only evidence you’ve cited so far is that I’ve bothered to argue with you!) I doubt I’m nearly as astute as you think I am, not least because I can’t discern what you’re saying when you start laying on the gnomic flattery.
LessWrong is a single point within a growing Singularity. [Rest of paragraph snipped.]
My own hunch: LW will carry on being a reasonable but not spectacular success for MIRI. It’ll continue serving as a pipeline of potential donors to (and workers for) MIRI & CFAR, growing steadily but not astoundingly for another decade or so until it basically runs its course.
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW’s memetic moment were right now? Would LessWrong then be merely restraining its own members?
OK, yes, if the LW memeplex went viral and imprinted itself on the minds of an entire generation, then by definition it’d be silly for me to airily say, “oh, that’s just an LW-specific meme, nothing to worry about”. But I don’t worry about that risk much for two reasons: the outside view says LW most likely won’t be that successful; and people love to argue politics, and are likely to argue politics even if most of them end up believing in (and overinterpreting) “Politics is the Mindkiller”. Little political scuffles still break out here, don’t they?
There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.
I do, actually, which raises the question as to why you think I didn’t have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state? Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve? This honestly seems like a highly contradictory stance, so I hope I’m not attacking a straw man.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles. As always, it’s given me a few new ideas about how to work with LessWrong.
LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of) the global demographic transition. So what makes the former more legit than the latter?
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world. There are topics and discussions that further this process and there are topics and discussions that simply do not. Similarly, there are topics and discussions where you can pretend you’re disagreeing, but not really honing your rationality in any way by participating. For reference, this conversation isn’t honing our rationality very well; we’re already pretty finely tuned. What’s happening between us now is current-optimum information exchange. I’m providing you with tangible structural components, and you’re providing me with excellent calibration data.
If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now that I read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
Oh but that is very much exactly what I can do!
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind. uFAI is not a high-class danger; EY is wrong, and the funding and effort is, in large part, illegitimate. I am personally content leaving that fear, effort and funding in place precisely because I can milk it for my own personal benefit. Does every such person who reads the Sequences run off to donate or start having nightmares about FAI punishing them for not donating? Absolutely, positively; this is not the case.
Deep beliefs are an entirely different class of psychological construct. Imagine I am very much of the belief that AI cannot be created because there’s something fundamental in the organic brain that a machine cannot replicate. What will reading every AI-relevant article in the Sequences get me? Will my deep (and irrational) beliefs be overridden and replaced with AI existential fear? It is very difficult for me to assume you’ll do anything but agree that such things do not happen, but I must leave open the possibility that you’ll see something that I missed. This is a relatively strong belief of mine, but unlike most others, I will never close myself off to new ideas. I am very much of the intention that child-like plasticity can be maintained so long as I do not make the conscious decision to close myself off and pretend I know more than I actually do.
Eliezer did ask that person to elaborate, but got no response.
Ah. No harm, no foul, then.
You haven’t actually mounted an argument for your own managerial superiority yet.
I’ve been masking heavily. To be honest, my ideas were embedded many replies ago. I’m only responding now insofar as it lets me see what you have to offer, what level you’re at, and what levels and subjects you’re overtly receptive to. (And on the off-chance, picking up an observer or two.)
I need you to be slightly more self-aware in order [...]
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special.)
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. Among other things, I’m measuring the rate at which you come to realizations. “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. That you haven’t put the reflection in simply from my cluing gives me a very detailed picture of how much you currently trust my judgment. I actually thought it was pretty awesome that you responded to the opening question in an isolated reply and had to rush out right after answering it, giving you much more time to reflect on it than you would have had by serially reading and replying without expending too much mental effort in doing so. I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
My own hunch:
Yeah, pretty much. LessWrong’s memetic moment in history isn’t necessarily at a point in time at which it is active. That’s sort of the premise of the concern of LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...But yeah, the ban on politics isn’t one of the dangerous LessWrong memes.
I do, actually, which raises the question as to why you think I didn’t have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state?
I did not. And do not, in fact. Those didactic states are states where there’s someone who’s clearly the teacher (primarily interested in passing on knowledge), and someone who’s clearly the pupil (or pupils plural — but however many, the pupil(s) are well aware they’re not the teacher). But LW and most other places where grown-ups discuss things don’t run so much on a teacher-student model; it’s mostly peers arguing with each other on a roughly even footing, and in a lot of those arguments, nobody’s thinking of themselves as the pupil. Even though people are still learning from each other in such situations, they’re not what I had in mind as “didactic”.
In hindsight I should’ve used the word “pedagogical” rather than “didactic”.
Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve?
I think these questions are driven by misunderstandings of what I meant by “didactic context”. What I wrote above might clarify.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles.
Thank you for updating in the face of evidence.
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world.
Fair enough.
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind.
I interpreted “deep beliefs” as referring to beliefs that matter enough to affect the believer’s behaviour. Under that interpretation, any new belief that leads to a major, consistent change in someone’s behaviour (e.g. changing jobs to donate thousands to MIRI) would seem to imply a change in deep beliefs. You evidently have a different meaning of “deep belief” in mind but I still don’t know what it is (even after reading that paragraph and the one after it).
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. [...] “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. [...] I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
Hrmm. Well, that wraps up that branch of the conversation quite tidily.
LessWrong’s memetic moment in history isn’t necessarily at a point in time at which it is active.
I suppose that’s true...
That’s sort of the premise of the concern of LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...but I’d still soften that “will” to a “might, someday, conceivably”. Things don’t go viral in so predictable a fashion. (And even when they do, they often go viral as short-term fads.)
Another reason I’m not too worried: the downsides of LW memes invading everyone’s head would be relatively small. People believe all sorts of screamingly irrational and generally worse things already.