Agreed that LW is in a kind of stagnation. However, I think that someone just writing a series of high-quality posts would suffice to fix it. As it stands, the amount of discussion in the comments is quite good; the problem is that there aren’t many interesting posts.
If a group said that they thought A was an important issue and the solution was X, most members would pay more attention than if a random individual said it. No one would have to listen to anything they say, but I imagine that many would choose to. Furthermore, if the exec were all actively involved in the projects, I imagine they’d be able to complete some themselves, especially if they chose smaller ones.
It isn’t entirely a good thing; many people have noticed that LW is something of an echo chamber for Eliezer’s views. In fact, we should be endorsing high-quality opinions that differ from the LW mainstream.
What are your heuristics for telling whether posts/comments contain “high-quality opinions” or represent the “LW mainstream”? Also, what did you think of Loosemore’s recent post on fallacies in AI predictions?
It’s just my impression; I don’t claim that it is precise.
As for the recent post by Loosemore, I think that it is sane and well-written, and clearly required a substantial amount of analysis and thinking to write. I consider it a central example of high-quality non-LW-mainstream posts.
Having said that, I mostly disagree with its conclusions. All the reasoning there rests on the assumption that the AGI will be logic-based (CLAI, following the post’s terminology), which I find unlikely. I’m 95% certain that if AGI is built anytime soon, it will be based on machine learning; in any case, the claim that CLAI is “the only meaningful class of AI worth discussing” is far from true.
I think LW might actually be suffering from something like a collective affective death spiral.
I’m not sure how it relates to the proposed stagnation (i.e., loss of momentum) of the LW community. Could you please elaborate? I understand affective death spirals to mean something completely different, so I am totally confused.
It’s quite easy (and in fact almost inevitable) to get carried away with a theory (as in a bunch of axiomatic ideas together with a logical framework) you have. “As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.” Thus you will cease to question the theory and cease to truly go beyond it, leading to stagnation.
What is the theory that you think LW has such a spiral around?
The idea that you can actually optimize your thought processes using deliberate rational will and analysis of biases, as exemplified by the home page, and specifically the extreme version of this idea that some users try to adopt.
Can you unpack “optimizing thought processes”? Under some definitions the statement is questionable, under others trivially true.
Also, the articles you’ve linked to describe techniques that are very popular outside LW, so if they are overrated, it isn’t an LW-specific mistake.
I can try to elaborate on the criticisms of the pages I linked. There hasn’t been any study of the long-term effects of spaced repetition. There are indications that it may be counter-productive and that it may act as an artificial ‘importance inflator’ for information, desensitizing the brain’s long-term response to new knowledge that is actually important, especially if one is not consciously aware of this effect.
As for the pomodoro technique, it’s even less researched than spaced repetition, and there’s very little solid evidence that it works. One worrying sign is that it seems to be a ‘desperate measure’ adopted by people experiencing low productivity, which points to some other problem (depression, burnout, etc.) that should be dealt with directly. In those cases pomodoros would make things far worse.
It could be said that none of these are criticisms of LW, but only of specific techniques that arose outside of LW. However, being too eager to adopt and believe in such techniques betrays ADS-type thinking about the idea that thought processes can be optimized through ‘productivity hacks’.
How are you distinguishing an affective death spiral from people thinking that something is a good idea?
People using Anki and Pomodoros (neither of which were invented on LW or by LWers) doesn’t look extreme to me.
TDT, FAI (esp. CEV), acausal trading, MWI: regardless of whether they are true, the level of criticism is lower than one would expect, either because of the halo effect or an ADS.
I see these things being discussed here from time to time. I don’t see any general booming of them, still less any increasing trend. Eliezer, of course, has boomed MWI quite strongly; but he is no longer here.
My impression is that inside LW they are usually assumed true, while outside LW they are usually assumed false or highly questionable. Again, I’m not saying that these theories are wrong, but the pattern looks suspicious: almost every one of LW’s non-mainstream beliefs can be traced back to Eliezer. What a coincidence. One possible explanation is the halo effect of the Sequences. Or they are actually underrated outside LW. Or my impressions are distorted.
I’m going with distorted.
Take MWI for example; apparently a lot of people are under the impression that LWers must be ~100% MWI fanatics. But the annual surveys report that lukewarm endorsement of MWI as the least bad QM interpretation covers, what, <50% of respondents? And it’s not clear to me that LW is even different from mainstream physicists, since the occasional polls of them show MWI keeps becoming more popular. It seems like people overgeneralize from the generally respectful treatment of MWI as a valid alternative (as opposed to early criticism of it as nonsense or crackpot pseudoscience) and from MWI topics being a lot more fun to discuss than, say, Copenhagen.
Or, global pandemics are regularly rated in the survey as a very concerning x-risk up there with AI, but are discussed much less; possibly because the risk of pandemics seems well-appreciated by society at large and there’s little new to discuss.
Similarly for some of the other stereotypical beliefs; critics like Stross and XiXiDu have been campaigning to turn Roko’s basilisk into the defining shibboleth of LW, but do even 5% of LWers take it seriously, or treat it as more than an obscure hypothetical in one superseded decision theory? (I don’t think so, but in this case I can’t prove it with survey data.)
And with TDT and acausal trading, they’re technical and difficult enough, relying heavily on formal logic and decision theory, that it’s hard to make any comments on them at all, either pro or con. Personally, I don’t believe in acausal trading. But I also don’t ever come out and talk about it, because I don’t feel I understand it or UDT/TDT well, am not particularly interested in them, and have nothing new to contribute to conversations about them; so why would I write about them, and if I were writing about them, why would you or anyone want to read what I wrote?