I have significantly decreased my participation in LW discussions recently, partly for reasons unrelated to whatever is going on here, but I have a few issues with the present state of this site, and perhaps they are relevant:
LW seems to be slowly becoming self-obsessed. “How do we get better contrarians?” “What should our debate policies be?” “Should discussing politics be banned on LW?” “Is LW a phyg?” “Shouldn’t LW become more of a phyg?” Damn. I am not interested in endless meta-debates about community building. Meta-debates could be fine, but only if they are rare—otherwise I feel the site is losing its purpose. Object-level topics should form an overwhelming majority both in the main section and in the discussion.
Too narrow a set of topics. Somewhat ironically, the explicitly forbidden topic of politics is debated quite frequently, but many potentially interesting areas of inquiry are left out completely. You post a question about calculus in the discussion section and get downvoted, since it is “off topic”—ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted. But there is only so much one can say about AI and ethics and Bayesian epistemology and self-improvement at a level accessible to a general internet audience. When I discovered Overcoming Bias (half of which later evolved into LW), it was overflowing with revolutionary and inspiring (from my point of view) ideas. Now I feel saturated, as the majority of new articles seem (again, from my point of view) to be devoid of new insights.
If you are afraid that LW could devolve into a dogmatic, narrow community without enough contrarians to maintain a high level of epistemic hygiene, don’t try to spawn new contrarians by methods of social engineering. Instead, try to encourage debates on a diverse set of topics, mainly those which haven’t been addressed by 246 LW articles already. If there is no consensus, people will disagree naturally.
I’m not trying to spawn new contrarians for the sake of having more contrarians, nor do I want to encourage debate for the sake of having more disagreements. What I care about is (me personally, as well as this community as a whole) having correct beliefs on the topics that I think are most important, namely the core rationality and Singularity-related topics, and I think having more contrarians who disagree about these core topics would help with that. Your suggestion doesn’t seem to help with my goals, or at least it’s not obvious to me how it would.
(BTW, I note that you’ve personally made 2 meta/community posts out of 7, whereas I’ve only made about 3 out of 58 (plus or minus a few counting errors). So maybe you can give me a pass on this one? :)
I plead guilty and promise to avoid making meta posts in the future. (Edit: I don’t object specifically to your meta-posts but to the overall relative number of meta discussions lately.)
Nevertheless, I doubt calling for more contrarians is helpful with respect to your purposes. The question of how to increase the number of contrarians is naturally answered by proposals to create a more contrarian-friendly environment, which, if implemented, would attract a disproportionately high number of people who like to be contrarians, whatever the local orthodoxy is. My suggestion is, instead, to try to attract a more diverse set of people, even those who are not interested in the topics you consider important. You would profit indirectly, since some of them would eventually get engaged in your favourite discussions and bring fresh ideas. Incidentally, they would also somewhat lower the level of discourse, but I am afraid that is an inevitable side effect of any anti-cult policy.
Do you also think that having more contrarians who disagree that “2+2=4” would increase our likelihood of having correct beliefs? I mean, if they are wrong, we will see the weakness in their arguments and refuse to update, so there is no harm; but if they are right and we are wrong, it could be very helpful.
More generally, what is your algorithm for deciding for which values of X we need more contrarians who disagree with X?
If people come to LessWrong thinking “2+2 != 4” or “computer manufacturing isn’t science”, is saying “You’re stupid” really raising the sanity waterline in any way? In short, we should distinguish between punishing disagreement and punishing obstinate behavior/contrarianism.
Well, computer manufacturing isn’t science, it’s engineering.
If someone says, “I believe in computers and GPS, but not quantum mechanics or science” then they are deeply confused.
Has there been a glut of those on LessWrong?
This. It’s obviously very possible that this was a troll, but that’s not my read.
Edit: There were one or two others, talking a lot without contributing much, who seemed to be the impetus for this discussion post. Wei Dai’s post seems to be a reaction to that.
It waxes and wanes. Try looking at all articles labeled “meta”; there were 10(!) in April of 2009 that fit your description of meta-debates (arguing about the karma system, the proper use of the wiki, the first survey, and an Eliezer post about getting less meta).
Granted, that was near the beginning of Less Wrong… but then there was another burst with 5 such articles in April 2010 as well. (I don’t know what it is about springtime...) Starting the Discussion area in September 2010 seems to have siphoned most of it off of Main; there have been 3-5 meta-ish posts per month since then (except for April 2011, in which there were 9… seriously, what the hell is going on here?)
Maybe April Fools’ Day gets people’s juices flowing?
I don’t see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it’s what we’re best at, and that’s always been so.
Whut?
Links or it didn’t happen.
Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff; yet despite the lack of real effort, you can tell yourself that going meta adds significant value, because it changes some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future…
...but I have a sneaking suspicion that this theory was just a pat answer offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object-level meta-ness. Understanding something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, therefore, going meta on a process would pattern-match to being superior to the process or the people who perform it, which might push one’s buttons if, for example, one were a narcissist.
I dare not speculate on the true meaning and function of going meta on going meta on going meta, but if I were forced to guess, I think it might have something to do with a sort of ironic humor over the appearance of mechanical repetitiveness as one iterates a generic “going meta” operation that some might naively have supposed to be the essence of human mental flexibility. Mental flexibility from a mechanical gimmick? Never!
Truly, we should all collectively pity the person who goes meta on going meta on going meta on going meta, because their ironically humorous detachment is such a shallow trick, and yet it is likely to leave them alienated from the world, and potentially bitter at its callous lack of self-aware appreciation for that person’s jokes.
Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh “seek whence”, is the answer just “seek whence”? (Related: Schmidhuber’s discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer’s Loebian critique of Goedel machines.))
I laughed this morning when I read this, and thought “Yay! Theism!” which sort of demands being shortened to yaytheism… which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism.
It would be funny to use the word “yaytheism” for what could be tabooed as “anthropomorphizing meta-aware computational idealism”, because it frequently seems that humor is associated with the relevant thoughts :-)
But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about “natural selection” (mechanistic) than about either “Azathoth, the blind idiot god” (anthropomorphic with negative valence) or “Gaia” (anthropomorphic with positive valence).
Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped stuff. You need to talk about mind-shaped things when talking about cogsci/AI/singularity topics, but it’s a “mindfield” of lurking faux pas and tribal triggers.
The following was hastily written, apologies for errors.
(I would go further, and suggest not even thinking about “natural selection” in the abstract, but about specific ecological contingencies and selection pressures, and especially the sorts of “pattern attractors” from complex systems. If I think about “evolution” I get this idea of a mysterious propelling force, rather than of how the optimization pressure comes from the actual environment. Alternatively, Vassar has previously emphasized thinking of evolution as a mere statistical tendency, not an optimizer as such—or something like that.)
I think one thing to keep in mind is that there is a reverse case of the anthropomorphic error, which is the pantheistic/Gnostic error, and that Catholic theologians were often striving hard to carefully distinguish their conception of God from mystical or superstitious conceptions, or conceptions that assigned God no direct role in the physical universe. But yeah, at some point this emphasis seems to have hurt the Church, ’cuz I see a lot of atheists thinking that Christians think that God is basically Zeus, i.e. a sky father who is sometimes a slave to human passions, rather than a Being that takes game-theoretic actions which are causally isomorphic to the outputs of certain emotions, to the extent that those emotions were evolutionarily selected for (i.e. given to men by God) for rational game-theoretic reasons. The Church was traditionally good at walking this line and appealing to people of very different intelligences, having a more anthropomorphic God for the commoners and a more philosophical God for the monks and priests, but I guess somewhere along the way this balance was lost. I’m tempted to blame the Devil working on the side of the Reformation and the Enlightenment, but I suppose realistically some blame must fall on the temporal Church.
Alternatively, maybe you do accept Neoplatonist or Catharian thinking where we have infinitely meta-aware computational agents as abstractions without any direct physical effect that isn’t screened off by the Demiurge (or cosmological natural selection or what have you). In that case I tentatively disagree, but my thoughts aren’t organized well enough for me to concisely explain why.
Damn. You just got metametameta.
I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.
Yeah, both of those are low-quality.
As for physics, I was thinking more about this post, whose negative karma I have already commented on. In the meantime I had forgotten that the post managed to return to zero afterwards.
“Low-quality” is too general a justification to reveal the detailed reasons for downvotes. Among the more concrete criticisms I recall many “this is off-topic, hence my downvote” reactions. My memories may be subject to bias, of course, and I don’t want to spend time compiling more reliable statistics. What I feel more certain about, however, is that there are many people who wish to keep all debates relevant to rationality, which effectively denotes an accidental set of topics, roughly {AI, charity donations, meta-ethics, evolutionary psychology, self-improvement, cognitive biases, Bayesian probability}. No doubt those topics are interesting, even for me. But not so much as to keep me engaged after three (or however many exactly) years of LW’s existence. And since I disagree with many standard LW memes, I suppose there may be other potential “contrarians” (perhaps more willing to voice their disagreements than I am) slowly losing interest for reasons similar to mine.
Yes, it’s sitting at +1 here and sitting at +2 at physics stackexchange. This supports the opposite of your view, suggesting that physics questions are almost as on-topic here as they are at physics stackexchange—which is surely too on-topic.
Wow. The first one is only at −2? That’s troubling. Ahh, nevermind.
Do we love going meta? Yes, we do.
Are we good at it? Sometimes yes, sometimes no; it also depends on the individual. But going meta is good for signalling intelligence, so we do it even when it’s just a waste of time.
Has it always been so? Yes; the impracticality and procrastination of many intelligent people are widely known.
The akrasia you refer to is actually a feature, not a bug. Just picture the opposite: intelligent people rushing to conclusions, caring more about getting stuff done than about resisting the urge to go with first answers and actually thinking.
My point is, we decry procrastination so much, but the fact is it is good that we procrastinate; if we didn’t have this tendency we would be doers, not thinkers. Not that I’m disparaging either, but you can’t rush math, or more generally deep, insightful thought; that way lies politics and insanity.
In a nutshell: perhaps we care so much about thinking things through (or alternatively get a rush from the intellectual crack) that we don’t really want to act, or at least don’t want to act on incomplete knowledge; hence the widespread procrastination, which, given the alternative, is a very good thing.
It seems to follow from this model that if we measure the tendency towards procrastination in two groups, one of which is selected for their demonstrable capability for math, or more generally for deep, insightful thought, and the other of which is not, we should find that the former group procrastinates more than the latter group.
Yes?
Yes & I’d modify that slightly to “the former group needs to more actively combat procrastination”.
Upvoted for not backing away from a concrete prediction.
I would be very surprised by that result.
Upvoted for good reasons for upvoting :)
For data, we could run a LW poll as a start and see. And out of curiosity, why would you be surprised?
Hm. You seem to have edited the comment after I responded to it, in such a way that makes me want to take back my response. How would we tell whether the former group needs to more actively combat procrastination?
I would be surprised because it’s significantly at odds with my experience of the relationship between procrastination and insight.
I have a habit of editing a comment for a bit after posting it; actually, I didn’t see your response until after editing. I don’t see how this changes your response in this instance, though?
I added that caveat since the former group might have members who originally suffered more from procrastination, as per the model, but eventually learned to deal with it; this might skew results if not taken into account.
It changes my response because while I kind of understand how to operationalize “group A procrastinates more than group B”, I don’t quite understand how to operationalize “group A needs to more actively combat procrastination than group B.” Since what I was approving of was precisely the concreteness of the prediction, swapping it out for something I understand less concretely left me less approving.
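A minimal sketch of how the poll comparison suggested above might be analyzed, assuming hypothetical self-rated procrastination scores for a math-selected group and a control group (all numbers below are invented placeholders, not real data):

```python
# Sketch only: compare invented self-rated procrastination scores
# (say, on a 1-10 scale) between a math-selected group and a control group.
# A Mann-Whitney U test is used because poll ratings are ordinal
# and unlikely to be normally distributed.
from scipy.stats import mannwhitneyu

math_group = [7, 8, 6, 9, 7, 8, 5, 9]     # invented placeholder ratings
control_group = [5, 6, 4, 7, 5, 6, 5, 4]  # invented placeholder ratings

stat, p = mannwhitneyu(math_group, control_group, alternative="greater")
print(f"U = {stat}, one-sided p = {p:.3f}")
# A small p would weakly support the prediction that the
# math-selected group procrastinates more.
```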
This is a good point. Maybe future meta-discussions could happen on the talk pages of wiki articles, about specific changes to those articles, especially the about page and the FAQ? These actually represent how LW culture is being codified for new users, but unfortunately none of the recent debates seem to have resulted in substantial modifications to them.
It’s too bad that automatic wiki editing privileges don’t come with a certain level of karma; that would remove a trivial inconvenience and eliminate wiki spam.
Hmmm… you know that wouldn’t be too hard to arrange. Keeping the passwords in sync after a change to one account would be much more work, but might be ignorable.
Ideally it seems like you would get your wiki authentication cookie automatically after logging into Less Wrong, so you could log in once and use both. I don’t know if that changes things regarding passwords.
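As a rough sketch of how the karma-gated, single-login scheme could work, here is some illustrative pseudologic; the karma lookup, the threshold, and the wiki hook are all hypothetical, since neither site exposes such an interface:

```python
# Illustrative sketch only: neither the LW karma lookup nor the wiki
# hook shown here exists; this just makes the proposed scheme concrete.
KARMA_THRESHOLD = 100  # hypothetical cutoff for wiki editing rights

def fetch_lw_karma(username: str) -> int:
    """Placeholder: a real version would query LW for the user's karma.
    Returns a fixed stub value here so the sketch runs."""
    return 150

def may_edit_wiki(username: str, has_lw_session_cookie: bool) -> bool:
    # Trust the shared login cookie instead of a separate wiki password,
    # then gate editing on karma to keep out spammers.
    return has_lw_session_cookie and fetch_lw_karma(username) >= KARMA_THRESHOLD

print(may_edit_wiki("example_user", has_lw_session_cookie=True))  # True with the stub
```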
Do you have examples of this sort of stuff so I can go vote it up?
For example, there are many posts tagged “physics”, most of which hover around zero. A moderately interesting puzzle now stands at −7.