You post a question about calculus in the discussion section and get downvoted, since it is “off topic”—ask on MathOverflow. A question about biology? Downvoted, if it is not an ev-psych speculation. Physics? Downvoted, even if it is of the most popular QM-interpretational sort. A puzzle? Downvoted.
I don’t see how you could possibly be observing that trend. The earliest active comment threads on Less Wrong were voting / karma debates. Going meta is not only what we love best, it’s what we’re best at, and that’s always been so.
Whut?
Links or it didn’t happen.
Yes, but the real question is why we love going meta. What is it about going meta that makes it worthwhile to us? Some have postulated that people here are actually addicted to going meta because it is easier to go meta than to actually do stuff, and yet despite the lack of real effort, you can tell yourself that going meta adds significant value because it helps change some insight or process once but seems to deliver recurring payoffs every time the insight or process is used again in the future…
...but I have a sneaking suspicion that this theory was just a pat answer offered as a status move, because going meta on going meta puts one in a position of objective examination of mere object-level meta-ness. Understanding something well helps one control the thing understood, and the understanding may have required power over the thing to learn the lessons in the first place. Clearly, therefore, going meta on a process would pattern-match to being superior to the process or the people who perform it, which might push one’s buttons if, for example, one were a narcissist.
I dare not speculate on the true meaning and function of going meta on going meta on going meta, but if I were forced to guess, I think it might have something to do with a sort of ironic humor over the appearance of mechanical repetitiveness as one iterates a generic “going meta” operation that some might naively have supposed to be the essence of human mental flexibility. Mental flexibility from a mechanical gimmick? Never!
Truly, we should all collectively pity the person who goes meta on going meta on going meta on going meta, because their ironically humorous detachment is such a shallow trick, and yet it is likely to leave them alienated from the world, and potentially bitter at its callous lack of self-aware appreciation for that person’s jokes.
Related question: If the concept of meta is drawn from a distribution, or is an instance of a higher-level abstraction, what concept is best characterized by that distribution itself / that higher-level abstraction itself? If we seek whence cometh “seek whence”, is the answer just “seek whence”? (Related: Schmidhuber’s discussion about how Goedel machines collapse all the levels of meta-optimization into a single level. (Related: Eliezer’s Loebian critique of Goedel machines.))
I laughed this morning when I read this, and thought “Yay! Theism!” which sort of demands being shortened to yaytheism… which sounds so much like atheism that the handful of examples I could find mostly occur in the context of atheism.
It would be funny to use the word “yaytheism” for what could be tabooed as “anthropomorphizing meta-aware computational idealism”, because it frequently seems that humor is associated with the relevant thoughts :-)
But going anthropomorphic seems to me like playing with fire. Specifically: I suspect it helps with some emotional reactions and pedagogical limitations, but it seems able to cause non-productive emotional reactions and tenacious confusions as a side effect. For example, I think most people are better off thinking about “natural selection” (mechanistic) than about either “Azathoth, the blind idiot god” (anthropomorphic with negative valence) or “Gaia” (anthropomorphic with positive valence).
Edited To Add: You can loop this back to the question about contrarians, if you notice how much friction occurs around the tone of discussion of mind-shaped stuff. You need to talk about mind-shaped things when talking about cogsci/AI/singularity topics, but it’s a “mindfield” of lurking faux pas and tribal triggers.
The following was hastily written, apologies for errors.
(I would go farther, and suggest not even thinking about “natural selection” in the abstract, but about specific ecological contingencies and selection pressures, and especially the sorts of “pattern attractors” from complex systems. If I think about “evolution” I get this idea of a mysterious propelling force, rather than of how the optimization pressure comes from the actual environment. Alternatively, Vassar has previously emphasized thinking of evolution as a mere statistical tendency, not an optimizer as such; or something like that.)
I think one thing to keep in mind is that there is a reverse case of the anthropomorphic error, namely the pantheistic/Gnostic error, and that Catholic theologians often strove hard to carefully distinguish their conception of God from mystical or superstitious conceptions, or conceptions that assigned God no direct role in the physical universe. But yeah, at some point this emphasis seems to have hurt the Church, ’cuz I see a lot of atheists thinking that Christians think that God is basically Zeus, i.e. a sky father who is sometimes a slave to human passions, rather than a Being that takes game-theoretic actions which are causally isomorphic to the outputs of certain emotions, to the extent that those emotions were evolutionarily selected (i.e. given to men by God) for rational game-theoretic reasons. The Church was traditionally good at walking this line and appealing to people of very different intelligences, having a more anthropomorphic God for the commoners and a more philosophical God for the monks and priests, but I guess somewhere along the way this balance was lost. I’m tempted to blame the Devil working on the side of the Reformation and the Enlightenment, but I suppose realistically some blame must fall on the temporal Church.
Alternatively, maybe you do accept Neoplatonist or Catharian thinking where we have infinitely meta-aware computational agents as abstractions without any direct physical effect that isn’t screened off by the Demiurge (or cosmological natural selection or what have you). In that case I tentatively disagree, but my thoughts aren’t organized well enough for me to concisely explain why.
Damn. You just got metametameta.
I thought of this Mitchell Porter post on MWI and this puzzle post by Thomas. As it happens, I downvoted both (though after a while, I dropped the downvote from the latter) and would defend those downvotes, but I can see how prase gets the impression that we only upvote articles on a narrow subset of topics.
Yeah, both of those are low-quality.
As for physics, I was thinking more about this post, whose negative karma I have already commented on. In the meantime I had forgotten that the post managed to return to zero afterwards.
“Low-quality” is too general a justification to reveal the detailed reasons for downvotes. Among the more concrete criticisms I recall many “this is off-topic, hence my downvote” reactions. My memories may be biased, of course, and I don’t want to spend the time compiling more reliable statistics. What I feel more certain about, however, is that many people wish to keep all debates relevant to rationality, which effectively denotes an accidental set of topics, roughly {AI, charity donations, meta-ethics, evolutionary psychology, self-improvement, cognitive biases, Bayesian probability}. No doubt those topics are interesting, even to me. But not so interesting as to keep me engaged after three (or however many) years of LW’s existence. And since I disagree with many standard LW memes, I suppose there may be other potential “contrarians” (perhaps more willing to voice their disagreements than I am) slowly losing interest for reasons similar to mine.
Yes, it’s sitting at +1 here and sitting at +2 at physics stackexchange. This supports the opposite of your view, suggesting that physics questions are almost as on-topic here as they are at physics stackexchange—which is surely too on-topic.
Wow. The first one is only at −2? That’s troubling. Ahh, nevermind.
Do we love going meta? Yes, we do.
Are we good at it? Sometimes yes, sometimes no; it also depends on the individual. But going meta is good for signalling intelligence, so we do it even when it’s just a waste of time.
Has it always been so? Yes; the impracticality and procrastination of many intelligent people are widely known.
The akrasia you refer to is actually a feature, not a bug. Just picture the opposite: intelligent people rushing to conclusions, caring more about getting stuff done than about resisting the urge to go with first answers and actually thinking.
My point is, we decry procrastination so much, but the fact is it is good that we procrastinate: if we didn’t have this tendency we would be doers, not thinkers. Not that I’m disparaging either, but you can’t rush math, or more generally deep, insightful thought; that way lies politics and insanity.
In a nutshell, perhaps we care so much about thinking things over (or, alternatively, get such a rush from the intellectual crack) that we don’t really want to act, or at least don’t want to act on incomplete knowledge; hence the widespread procrastination, which, given the alternative, is a very good thing.
It seems to follow from this model that if we measure the tendency towards procrastination in two groups, one of which is selected for their demonstrable capability for math, or more generally for deep, insightful thought, and the other of which is not, we should find that the former group procrastinates more than the latter group.
Yes?
Yes & I’d modify that slightly to “the former group needs to more actively combat procrastination”.
Upvoted for not backing away from a concrete prediction.
I would be very surprised by that result.
Upvoted for good reasons for upvoting :)
For data, we could run a LW poll as a start and see. And out of curiosity, why would you be surprised?
Hm. You seem to have edited the comment after I responded to it, in such a way that makes me want to take back my response. How would we tell whether the former group needs to more actively combat procrastination?
I would be surprised because it’s significantly at odds with my experience of the relationship between procrastination and insight.
I have a habit of editing a comment for a bit after replying. Actually, I didn’t see your response until after editing. I don’t see how this changes your response in this instance, though?
I added that caveat since the former group might have members who originally suffered more from procrastination, as per the model, but eventually learned to deal with it; this might skew results if not taken into account.
It changes my response because, while I kind of understand how to operationalize “group A procrastinates more than group B”, I don’t quite understand how to operationalize “group A needs to more actively combat procrastination than group B.” Since what I was approving of was precisely the concreteness of the prediction, swapping it out for something I understand less concretely left me less approving.
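[Editor's note: the operationalization debated above — testing whether a math-selected group reports more procrastination than a control group, e.g. via the LW poll suggested earlier — could be sketched as a simple two-sample permutation test. Everything below is a hypothetical illustration: the function name, the scoring scale, and the poll data are all made up.]

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """One-sided permutation test for 'group A procrastinates more than
    group B', using the difference in mean self-reported scores.

    Returns the observed mean difference (A - B) and an approximate
    one-sided p-value: the fraction of random relabelings that produce
    a difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if diff >= observed:
            count += 1
    return observed, count / n_permutations

# Hypothetical poll data: self-reported procrastination scores, 0-10.
math_group = [7, 8, 6, 9, 7, 8]     # selected for demonstrable math capability
control_group = [5, 4, 6, 5, 3, 6]  # not so selected

diff, p = permutation_test(math_group, control_group)
# A small p would support the prediction that the math-selected
# group procrastinates more, under these (invented) numbers.
```

This only operationalizes the first formulation ("procrastinates more"); the revised formulation ("needs to more actively combat procrastination") would require a further survey question about countermeasures, which is exactly the concreteness gap raised in the last comment.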