You asked about emotional stuff, so here is my perspective. I have extremely weird feelings about this whole forum that may affect my writing style. My view keeps popping back and forth between different interpretations, like in the rabbit-duck gestalt image. On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental effort to stay “sane” here. Maybe I should refrain from commenting. It’s a pity, because I’m generally very interested in the topics discussed here, but the tone and the underlying ideology are pushing me away. On the other hand, I feel an urge to check out the posts despite this effect. I’m not sure what aspect of certain forums has this psychological effect on my thinking, but I’ve felt it on various reddit communities as well.
On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental effort to stay “sane” here.
Seconded, actually, and it’s particular to LessWrong. I know I often joke that posting here gets treated as submitting academic material and skewered accordingly, but that is very much what it feels like from the inside. It feels like confronting a hostile crowd of, as Jonah put it, radical agnostics, every single time one posts, and they’re waiting for you to say something so they can jump down your throat about it.
Oh, and then you run into the issue of having radically different priors and beliefs, so that you find yourself on a “rationality” site where someone is suddenly using the term “global warming believer” as though the IPCC never issued multiple reports full of statistical evidence. I mean, sure, I can put some probability on, “It’s all a conspiracy and the official scientists are lying”, but for me that’s in the “nonsense zone”—I actually take offense to being asked to justify my belief in mainstream science.
As much as “good Bayesians” are never supposed to agree to disagree, I would very much like it if people were up-front about their priors and beliefs, so that we could both decide whether it’s worth spending the energy on long threads trying to convince people of things.
Oh, and then you run into the issue of having radically different priors and beliefs, so that you find yourself on a “rationality” site where someone is suddenly using the term “global warming believer” as though the IPCC never issued multiple reports full of statistical evidence.
Rather bad statistical evidence I might add. Seriously, your argument amounts to an appeal to authority. Whatever happened to nullius in verba?
I mean, sure, I can put some probability on, “It’s all a conspiracy and the official scientists are lying”,
Some of them are; a number of them were even caught when the Climategate emails went public. Most of them, however, are some combination of ideologues and people who couldn’t handle the harder sciences and are now memorizing the teacher’s password: in other words, a prospiracy. Add in what happens to climate journals that dare publish anything insufficiently alarmist, and one gets the idea about the current state of climate science.
Appeal to Authority? Not in the normal sense that the IPCC exercises violent force, and I therefore designate them factually correct. No, it’s an Appeal to Expertise Outside My Own Domain. It’s me expecting that the same academic and scientific processes and methods that produced my expertise in my fields produced domain-experts in other fields with their own expertise, and that I can therefore trust in their findings about as thoroughly as I trust in my own.
Appeal to Authority? Not in the normal sense that the IPCC exercises violent force, and I therefore designate them factually correct.
That’s not the normal sense of appeal to authority; that would be appeal to force.
No, it’s an Appeal to Expertise Outside My Own Domain.
And how do you know that they’re actual experts? Because they (metaphorically) wear lab coats? That’s what appeal to authority is. While it’s not necessarily a fallacy, it’s notable that science started making progress as soon as people disavowed using it.
Do you believe that the mass of the muon as listed by the Particle Data Group is at least approximately correct? If so, why?

I haven’t tracked down the specific evidence, but muons are comparatively easy: they live long enough to leave tracks in particle detectors with known magnetic fields. That gives you the charge-to-mass ratio. Given that charge looks quantized (the Millikan oil drop experiment and umpteen repetitions), and given other pieces of evidence from the particle tracks of muon decay (the electrons from that decay again leave tracks, and the angles are visible even if the neutrinos aren’t), I’d be surprised if the muon mass wasn’t pretty solid.
Assuming that both particle physicists and climatologists are doing things properly, that would only mean that the muon mass has much smaller error bars than estimates of global warming (which it does), not that the former is more likely to be correct within its error bars.
Then again, it’s possible that climatologists are less likely to be doing things properly.
If you ask a physicist or an evolutionist why their beliefs are correct, they will generally give you an answer (or at least start talking about the general principle). If you ask that question about climate science, you’ll generally get either a direct appeal to authority or an indirect one: it’s all in this official report, which I haven’t read, but it’s official so it must be correct.
Heck, climate scientists are sparing even about basic facts. They’ll mention that CO2 is a greenhouse gas, but avoid any more technical questions. For example, I only recently found out that (in the absence of other factors or any feedback) temperature is a logarithmic function of CO2 concentration.
Heck, climate scientists are sparing even about basic facts. They’ll mention that CO2 is a greenhouse gas, but avoid any more technical questions. For example, I only recently found out that (in the absence of other factors or any feedback) temperature is a logarithmic function of CO2 concentration.
So it seems like you’ve never cracked open any climate/atmospheric science textbook? Because that is pretty basic info. It seems like you’re determined to be skeptical despite not really having spent much time learning about the state of the science. Also, it sounds like you are equivocating between “climate scientist” and “person on the internet who believes in global warming.”
My background is particle physics; if someone asked me about the mass of the muon, I’d have to make about a hundred appeals to authority to give them any relevant information, and I suspect climate scientists are in the same boat when talking to people who don’t understand some of the basics. I’ve personally engaged with special relativity crackpots who ask you to justify everything and keep saying this or that basic fact from the field is an appeal to authority. There is no convincing a determined skeptic, so it’s best not to engage.
If you are near a university campus, wait until there is a technical talk on climate modelling and go sit and listen (don’t ask questions, just listen). You’ll probably be surprised at how vociferous the debate is; climate modelers are serious scientists working hard on perfecting their models.
Thanks so much for sharing. I’m astonished by how much more fruitful my relationships have become since I’ve started asking.
I think that a lot of what you’re seeing is a cultural clash: different communities have different blind spots and norms for communication, and often the combination of (i) the blind spots of the communities that one is familiar with and (ii) the respects in which a new community actually is unsound can give one the impression “these people are beyond the pale!” when the actual situation is that they’re no less rational than members of one’s own communities.
I had a very similar experience to your own coming from academia, and wrote a post titled The Importance of Self-Doubt in which I raised the concern that Less Wrong was functioning as a cult. But since then I’ve realized that a lot of the apparently weird beliefs of LWers are in fact also held by very credible people: for example, Bill Gates recently expressed serious concern about AI risk.

If you’re new to the community, you’re probably unfamiliar with my own credentials, which should reassure you somewhat:
I did a PhD in pure math under the direction of Nathan Dunfield, who coauthored papers with Bill Thurston, who formulated the geometrization conjecture that Perelman proved, thereby solving one of the Clay Millennium Problems.
I’ve been deeply involved with math education for highly gifted children for many years. I worked with the person who won the American Mathematical Society prize for best undergraduate research, back when he was 12.
I worked at GiveWell, which partners with Good Ventures, Dustin Moskovitz’s foundation.
I’ve done full-stack web development, making an asynchronous clone of StackOverflow (link).
I’ve done machine learning, rediscovering logistic regression, collaborative filtering, hierarchical modeling, the use of principal component analysis to deal with multicollinearity, and cross-validation. (I found the expositions so poor that it was faster for me to work things out on my own than to learn from them, though I eventually learned the official versions.) You can read some details of things that I found here. I also did a project implementing Bayesian adjustment of Yelp restaurant star ratings using their public dataset here.
So I imagine that I’m credible by your standards. There are other people involved in the community who you might find even more credible. For example: (a) Paul Christiano, who was an International Mathematical Olympiad medalist, wrote a 50-page paper on quantum computational complexity with Scott Aaronson as an undergraduate at MIT, and is now a theoretical CS grad student at Berkeley; and (b) Jacob Steinhardt, a Hertz graduate fellow who does machine learning research under Percy Liang at Stanford.
So you’re not actually in some sort of twilight zone. I share some of your concerns about the community, but the groupthink here is no stronger than the groupthink present in academia. I’d be happy to share my impressions of the relative soundness of the various LW community practices and beliefs.
There are other people involved in the community who you might find even more credible. For example: (a) Paul Christiano, who was an International Mathematical Olympiad medalist, wrote a 50-page paper on quantum computational complexity with Scott Aaronson as an undergraduate at MIT, and is now a theoretical CS grad student at Berkeley; and (b) Jacob Steinhardt, a Hertz graduate fellow who does machine learning research under Percy Liang at Stanford.
Of course, Christiano tends to issue disclaimers with his MIRI-branded AGI safety work, explicitly stating that he does not believe in alarmist UFAI scenarios. Which is fine, in itself, but it does show how people expect someone associated with these communities to sound.
And Jacob Steinhardt hasn’t exactly endorsed any “Twilight Zone” community norms or propaganda views. Errr, is there a term for “things everyone in a group thinks everyone else believes, whether or not they actually do”?
I’m not claiming otherwise: I’m merely saying that Paul and Jacob don’t dismiss LWers out of hand as obviously crazy, and have in fact found the community to be worthwhile enough to have participated substantially.
I think in this case we have to taboo the term “LWers” ;-). This community has many pieces in it, and two large parts of the original core are “techno-libertarian Overcoming Bias readers with many very non-mainstream beliefs that they claim are much more rational than anyone else’s beliefs” and “the SL4 mailing list wearing suits and trying to act professional enough that they might actually accomplish their Shock Level Four dreams.”
On the other hand, in the process of the site’s growth, it has eventually come to encompass those two demographics plus, to some limited extent, almost everyone who’s willing to assent that science, statistical reasoning, and the neuro/cognitive sciences actually really work and should be taken seriously. With special emphasis on statistical reasoning and cognitive sciences.
So the core demographic consists of Very Unusual People, but the periphery demographics, who now make up most of the community, consist of only Mildly Unusual People.

Yes, this seems like a fair assessment of the situation. Thanks for disentangling the issues. I’ll be more precise in the future.
Those are indeed impressive things you did. I agree very much with your post from 2010. But the fact that many people have this initial impression shows that something is wrong. What makes it look like a “twilight zone”? Why don’t I feel the same symptoms for example on Scott Alexander’s Slate Star Codex blog?
Another thing I could pinpoint is that I don’t want to identify as a “rationalist”, I don’t want to be any -ist. It seems like a tactic to make people identify with a group and swallow “the whole package”. (I also don’t think people should identify as atheist either.)
I’m sympathetic to everything you say.

In my experience there’s an issue of Less Wrongers being unusually emotionally damaged (e.g. relative to academics), and this gives rise to a lot of problems in the community. But I don’t think that the emotional damage primarily comes from the weird stuff that you see on Less Wrong. Rather, what one sees is people who have borne the brunt of the phenomenon that I described here disproportionately relative to other smart people, often because they’re unusually creative and have been marginalized by conformist norms.
Quite frankly, I find the norms in academia very creepy: I’ve seen a lot of people develop serious mental health problems in connection with their experiences in academia. It’s hard to see this from the inside: I was disturbed by what I saw, but I didn’t realize that math academia actually functions as a cult; that conclusion comes from retrospective impressions, and it is in fact the implicit consensus of some of the best mathematicians in the world (I can give references if you’d like).
I was disturbed by what I saw, but I didn’t realize that math academia actually functions as a cult
I’m sure you’re aware that the word “cult” is a strong claim that requires a lot of evidence, but I’d also issue a friendly warning that to me at least it immediately set off my “crank” alarm bells. I’ve seen too many Usenet posters who are sure they have a P=NP (or P≠NP) proof, or a proof that set theory is false, or etc., who ultimately claim that because “the mathematical elite” are a cult, no one will listen to them. A cult generally engages in active suppression, often defamation, and not simply exclusion. Do you have evidence of legitimate mathematical results or research being hidden/withdrawn from journals or publicly derided, or is it more of an old boys’ club that’s hard for outsiders to participate in and that plays petty politics to the damage of the science?
Grothendieck’s problems look to be political and interpersonal. Perelman’s also. I think it’s one thing to claim that mathematical institutions are no more rational than any other politicized body, and quite another to claim that it’s a cult. Or maybe most social behavior is too cult-like. If so, perhaps don’t single out mathematics.
I’ve seen a lot of people develop serious mental health problems in connection with their experiences in academia.
I question the direction of causation. Historically many great mathematicians have been mentally and socially atypical and ended up not making much sense with their later writings. Either mathematics has always had an institutional problem, or mathematicians have always had an incidence of mental difficulties (or a combination of both, though I would expect one to dominate).
Especially in Thurston’s On Proof and Progress in Mathematics I can appreciate the problem of trying to grok specialized areas of mathematics. The terminology and symbology are opaque to the uninitiated. It reminds me of section 1 of the Metamath Book, which expresses similar unhappiness with the state of knowledge between specialist fields of mathematics and the general difficulty of learning mathematics. I had hoped that Metamath would become more popular and tie various subfields together through unifying theories and definitions, but as far as I can tell it languishes as a hobbyist project for a few dedicated mathematicians.
I’m sure you’re aware that the word “cult” is a strong claim that requires a lot of evidence, but I’d also issue a friendly warning that to me at least it immediately set off my “crank” alarm bells.
Thanks, yeah, people have been telling me that I need to be more careful in how I frame things. :-)
Do you have evidence of legitimate mathematical results or research being hidden/withdrawn from journals or publicly derided, or is it more of an old boys’ club that’s hard for outsiders to participate in and that plays petty politics to the damage of the science?
The latter, but note that that’s not necessarily less damaging than active suppression would be.
Or maybe most social behavior is too cult-like. If so, perhaps don’t single out mathematics.
Yes, this is what I believe. The math community is just unusually salient to me, but I should phrase things more carefully.
I question the direction of causation. Historically many great mathematicians have been mentally and socially atypical and ended up not making much sense with their later writings. Either mathematics has always had an institutional problem, or mathematicians have always had an incidence of mental difficulties (or a combination of both, though I would expect one to dominate).
Most of the people who I have in mind did have preexisting difficulties. I meant something like “relative to a counterfactual where academia was serving its intended function.” People of very high intellectual curiosity sometimes approach academia believing that it will be an oasis and find this not to be at all the case, and that the structures in place are in fact hostile to them.
This is not what the government should be supporting with taxpayer dollars.
Especially in Thurston’s On Proof and Progress in Mathematics I can appreciate the problem of trying to grok specialized areas of mathematics.
The latter, but note that that’s not necessarily less damaging than active suppression would be.
I suppose there’s one scant anecdote for estimating this: cryptography research seemed to lag a decade or two behind actively suppressed/hidden government research. Granted, there was also less public interest in cryptography until the 80s or 90s, but it seems that suppression can only delay publication, not prevent it.
The real risk of suppression and exclusion both seem to be in permanently discouraging mathematicians who would otherwise make great breakthroughs, since affecting the timing of publication/discovery doesn’t seem as damaging.
This is not what the government should be supporting with taxpayer dollars.
I think I would be surprised if Basic Income was a less effective strategy than targeted government research funding.
What are your own interests?
Everything from logic and axiomatic foundations of mathematics to practical use of advanced theorems for computer science. What attracted me to Metamath was the idea that if I encountered a paper that was totally unintelligible to me (say Perelman’s proof of the Poincaré conjecture or Wiles’ proof of Fermat’s Last Theorem), I could backtrack through sound definitions to concepts I already knew, and then build my understanding up from those definitions. Alas, that’s not yet possible; even just having a cross-reference of related definitions between various fields would be helpful. I take it that model theory is the place to look for such a cross-reference, and so that is probably the next thing I plan to study.
Practically, I realize that I don’t have enough time or patience or mental ability to slog through formal definitions all day, and so it would be nice to have something even better. A universal mathematical educator, so to speak. Although I worry that without a strong formal understanding I will miss important results/insights. So my other interest is building the kind of agent that can identify which formal insights are useful or important, which sort of naturally leads to an interest in AI and decision theory.
I would like to see some of those references (simply because I have no relation to Academia, and don’t like things I read somewhere to gestate into unfounded intuitions about a subject).
Quite frankly, I find the norms in academia very creepy: I’ve seen a lot of people develop serious mental health problems in connection with their experiences in academia. It’s hard to see this from the inside: I was disturbed by what I saw, but I didn’t realize that math academia actually functions as a cult; that conclusion comes from retrospective impressions, and it is in fact the implicit consensus of some of the best mathematicians in the world (I can give references if you’d like).
I’ve only been in CS academia, and wouldn’t call that a cult. I would call it, like most of the rest of academia, a deeply dysfunctional industry in which to work, but that’s the fault of the academic career and funding structure. CS is even relatively healthy by comparison to much of the rest.
How much of our impression of mathematics as a creepy, mental-health-harming cult comes from pure stereotyping?
I was more positing that it’s a self-reinforcing, self-creating effect: people treat Mathematics in a cultish way because they think they’re supposed to.
For what it’s worth, I have observed a certain reverence in the way great mathematicians are treated by their lesser-accomplished colleagues that can often border on the creepy. This seems specific to math, in that it exists in other disciplines only with lesser intensity.
But I agree, “dysfunctional” seems to be a more apt label than “cult.” May I also add “fashion-prone?”
Finally, Alan Turing, the great Bletchley Park code breaker, father of computer science and homosexual, died trying to prove that some things are fundamentally unprovable.
This is a staggeringly wrong account of how he died.
I don’t have direct exposure to CS academia, which, as you comment, is known to be healthier :-). I was speaking in broad brushstrokes; I’ll qualify my claims and impressions more carefully later.
The top 3 answers to the MathOverflow question Which mathematicians have influenced you the most? are Alexander Grothendieck, Mikhail Gromov, and Bill Thurston. Each of these has expressed serious concerns about the community.
Grothendieck was actually effectively excommunicated by the mathematical community and then was pathologized as having gone crazy. See pages 37-40 of David Ruelle’s book A Mathematician’s Brain.
Gromov expresses strong sympathy for Grigory Perelman having left the mathematical community starting on page 110 of Perfect Rigor. (You can search for “Gromov” in the pdf to see all of his remarks on the subject.)
Thurston made very apt criticisms of the mathematical community in his essay On Proof and Progress In Mathematics. See especially the beginning of Section 3: “How is mathematical understanding communicated?” Terry Tao endorses Thurston’s essay in his obituary of Thurston. But the community has essentially ignored Thurston’s remarks: one almost never hears people talk about the points that Thurston raises.
I don’t know about Grothendieck, but the two other sources appear to have softer criticism of the mathematical community than “actually functioning as a cult”.
The links you give are extremely interesting, but, unless I am missing something, it seems that they fall short of justifying your earlier statement that math academia functions as a cult. I wonder if you would be willing to elaborate further on that?
The scariest thing to me is that the most mathematically talented students are often turned off by what they see in math classes, even at the undergraduate and graduate levels. Math serves as a backbone for the sciences, so this may be badly undercutting scientific innovation at a societal level.
I honestly think that it would be an improvement on the status quo to stop teaching math classes entirely. Thurston characterized his early math education as follows:
I hated much of what was taught as mathematics in my early schooling, and I often received poor grades. I now view many of these early lessons as anti-math: they actively tried to discourage independent thought. One was supposed to follow an established pattern with mechanical precision, put answers inside boxes, and “show your work,” that is, reject mental insights and alternative approaches.
I think that this characterizes math classes even at the graduate level, only at a higher level of abstraction. The classes essentially never offer students exposure to free-form mathematical exploration, which is what it takes to make major scientific discoveries with significant quantitative components.
I distinctly remember having points taken off of a physics midterm because I didn’t show my work. I think I dropped the exam in the waste basket on the way out of the auditorium.
I’ve always assumed that the problem is three-fold: generating a formal proof is NP-hard, getting the right answer via shortcuts can include cheating, and the faculty’s time is limited. Professors and graders do not have the capacity to rigorously demonstrate to themselves that the steps a student has written down actually pinpoint the unique answer. Without access to the student’s mind, graders are unable to determine whether students cheat; a student being able to memorize and/or reproduce the exact steps of a calculation significantly decreases the likelihood that they cheated. Even if graders could do one or both of the previous for a single student, they are not 30x or 100x as smart as their students, making it impractical to repeat the process for every student.
That said, I had some very good mathematics teachers in higher level courses who could force students to think, and one in particular who could encourage/demand novelty from students simply by asking them to solve problems that they hadn’t yet learned to solve. I didn’t realize the power of the latter approach until later (and at the time everyone complained about exams with a median score well under 50%), but his classes were always my favorite.
Thank you for all these interesting references. I enjoyed reading all of them, and rereading in Thurston’s case.
Do people pathologize Grothendieck as having gone crazy? I mostly think people think of him as being a little bit strange. The story I heard was that because of philosophical disagreements with military funding and personal conflicts with other mathematicians he left the community and was more or less refusing to speak to anyone about mathematics, and people were sad about this and wished he would come back.
Do people pathologize Grothendieck as having gone crazy?
His contribution to math is too great for people to have explicitly adopted a stance that was too unfavorable to him, and many mathematicians did in fact miss him a lot. But as Perelman said:
“Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest.” He has also said: “It is not people who break ethical standards who are regarded as aliens. It is people like me who are isolated.”
If pressed, many mathematicians downplay the role of those who behaved unethically toward him and the failure of the community to give him a job in favor of a narrative “poor guy, it’s so sad that he developed mental health problems.”
Another thing I could pinpoint is that I don’t want to identify as a “rationalist”, I don’t want to be any -ist.
I’ve always thought that calling yourself a “rationalist” or “aspiring rationalist” is rather useless. You’re either winning or not winning. Calling yourself by some funny term can give you the nice feeling of belonging to a community, but it doesn’t actually make you win more, in itself.
My view keeps popping back and forth between different interpretations
That sounds like you engage in binary thinking and don’t leave enough room for shades of grey in your uncertainty.
You feel the need to judge arguments as either true or false, and don’t have a mental category for “might be true, might not be true.”
Jonah makes strong claims for which he doesn’t provide evidence. He’s clear about the fact that he hasn’t provided the necessary evidence.
Given that, you pattern-match to “crackpot” instead of putting Jonah in the mental category where you don’t know whether what he says is right or wrong.
If you start to put a lot of claims into the “I don’t know” pile, you don’t constantly pop between belief and non-belief. Popping back and forth means that the size of your updates when presented with new evidence is too large.
Being able to say “I don’t know” is part of genuine skepticism.
I’m not talking about popping back and forth between true and false, but between two explanations. You can have a multimodal probability distribution in which two distant modes are about equally probable, and when you update, sometimes one is larger and sometimes the other. Of course one doesn’t need to choose a point estimate (maximum a posteriori); ideally the distribution itself should be believed in its entirety. But just as you can’t see the rabbit-duck as simultaneously 50% rabbit and 50% duck, one sometimes switches between different explanations, similarly to an MCMC sampling procedure.
I don’t want to argue this too much because it’s largely a preference of style and culture. I think the discussions are very repetitive and it’s an illusion that there is much to be learned by spending so much time thinking meta.
I feel that I need to maintain extreme mental effort to stay “sane” here. Maybe I should refrain from commenting. It’s a pity, because I’m generally very interested in the topics discussed here, but the tone and the underlying ideology are pushing me away.
I would be very interested in hearing elaboration on this topic, either publicly or privately.
I prefer public discussions. First, some background: I’m a computer science student who took courses in machine learning and AI, wrote theses in these areas (nothing exceptional), and I enjoy books like Thinking, Fast and Slow, The Black Swan, and authors like Pinker, Dawkins, Dennett, Ramachandran, etc. So the topics discussed here are interesting to me as well. But the atmosphere seems quite closed and inward-turning.
I feel similarities to reddit’s Red Pill community. Previously “ignorant” people feel the community has opened a new world to them, they lived in darkness before, but now they found the “Way” (“Bayescraft”) and all this stuff is becoming an identity for them.
Sorry if it’s offensive, but I feel as if many people had no success in the “real world” matters and invented a fiction where they are the heroes by having joined some great organization much higher above the general public, who are just irrational automata still living in the dark.
I dislike the heavy use of insider terminology, which makes communication with “outsiders” about these ideas quite hard: you get used to referring to these things by the in-group terms, so you become kind of isolated from your real-life friends, feeling that “they won’t understand, they’d have to read so much.” When actually many of the concepts are not all that new and could be phrased in a way that the “uninitiated” can also get.
There are too many cross-references in posts, and it keeps you busy with the site longer than necessary. It seems that people try to prove they know some concept by using the jargon and including links to it. Instead, I’d prefer authors who actively try to minimize the need for links and jargon.
I also find the posts quite redundant. They seem to be reiterations of the same patterns in very long prose, with people’s stories intertwined with the ideas, instead of striving for clarity and conciseness. Much of it feels a lot like self-help for people with derailed lives who try to engineer their life (back) to success. I may be wrong, but I get a depressed vibe from reading the site too long. It may also be because there is no lighthearted humor or in-jokes or “fun” or self-irony at all. Maybe the members are just like that in general (perhaps due to mental differences, like being on the autism spectrum; I’m not a psychiatrist).
I can see that people here are really smart and the comments are often very reasonable. And it makes me wonder why they’d hold a single person such as Yudkowsky in such high esteem compared to established book authors or academics or industry people in these areas. I know there has been much discussion about cultishness, and I think it goes a lot deeper than surface issues. LessWrong seems to be quite isolated and distrustful of the mainstream. Many people seem to have read stuff first from Yudkowsky, who often does not reference earlier works that basically state the same things, so people get the impression that all or most of the ideas in “The Sequences” come from him. I was quite disappointed several times when I found the same ideas in mainstream books. The Sequences often depict the whole outside world as dumber than it is (straw-man tactics, etc.).
Another thing is that discussion is often too meta (or meta-meta). There is discussion of Bayes’ theorem and math principles, but no actual detailed, worked-out stuff. Very little actual programming, for example. I’d expect people to create GitHub projects and IPython notebooks to show some examples of what they are talking about. Much of the meta-meta-discussion is very opinion-based because there is no immediate feedback about whether someone is wrong or right. It’s hard to test such hypotheses. For example, in this post I would have expected an example dataset and a demonstration of how PCA can uncover something surprising. Otherwise it’s just floating out there, although it matches nicely with the pattern that “some math concept gave me insight that refined my rationality.” I’m not sure; maybe these “rationality improvements” are sometimes illusions.
I also don’t get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism. I just don’t see why these belong that much together. I find them too speculative and detached from the “real world” to be the central ideas. I realize they are important, but their prevalence could also be explained as “escapism” and it promotes the discussion of untestable meta things that I mentioned above, never having to face reality. There is much talk about what evidence is but not much talk that actually presents evidence.
I needed to develop a sort of immunity against topics like acausal trade: I can’t fully specify how they are wrong, but they feel wrong, they are hard to translate into practical, testable statements, and they mess with my head in the wrong way.
And of course there is also that secrecy around and hiding of “certain things”.
That’s it. This place may just not be for me, which is fine. People can have their communities in the way they want. You just asked for elaboration.
Thanks for the detailed response! I’ll respond to a handful of points:
Previously “ignorant” people feel the community has opened a new world to them, they lived in darkness before, but now they found the “Way” (“Bayescraft”) and all this stuff is becoming an identity for them.
I certainly agree that there are people here who match that description, but it’s also worth pointing out that there are actual experts too.
the general public, who are just irrational automata still living in the dark.
One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and one’s own mistakes, rather than on mistakes made by other people because they’re dumb.
It seems that people try to prove they know some concept by using the jargon and including links to it. Instead, I’d prefer authors who actively try to minimize the need for links and jargon.
I’m not sure this is avoidable, and in full irony I’ll link to the wiki page that explains why.
In general, there are lots of concepts that seem useful, but the only way we have to refer to concepts is either to refer to a label or to explain the concept. A number of people read through the sequences and say “but the conclusions are just common sense!”, to which the response is, “yes, but how easy is it to communicate common sense?” It’s one thing to be able to recognize that there’s some vague problem, and another thing to be able to say “the problem here is inferential distance; knowledge takes many steps to explain, and attempts to explain it in fewer steps simply won’t work, and the justification for this potentially surprising claim is in Appendix A.” It is one thing to be able to recognize a concept as worthwhile; it is another thing to be able to recreate that concept when a need arises.
Now, I agree with you that having different labels to refer to the same concept, or conceptual boundaries or definitions that are drawn slightly differently, is a giant pain. When possible, I try to bring the wider community’s terminology to LW, but this requires being in both communities, which limits how much any individual person can do.
I also don’t get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism.
Part of that is just seeding effects—if you start a rationality site with a bunch of people interested in transhumanism, the site will remain disproportionately linked to transhumanism because people who aren’t transhumanists will be more likely to leave and people who are transhumanists will be more likely to find and join the site.
Part of it is that those are the cluster of ideas that seem weird but ‘hold up’ under investigation—most of the reasons to believe that the economy of fifty years from now will look like the economy of today are just confused, and if a community has good tools for dissolving confusions you should expect them to converge on the un-confused answer.
A final part seems to be availability; people who are convinced by the case for cryonics tend to be louder than the people who are unconvinced. The annual surveys show the perception of LW one gets from just reading posts (or posts and comments) is skewed from the perception of LW one gets from the survey results.
One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and one’s own mistakes, rather than on mistakes made by other people because they’re dumb.
I agree that LW is much better than RationalWiki, but I still think that the norms for discussion are much too far in the direction of focus on how other commenters are wrong as opposed to how one might oneself be wrong.
I know that there’s a selection effect (with respect to the more frustrating interactions standing out). But people not infrequently believe, with very high confidence, that I’m wrong about things that I know much more about than they do, and in such instances I find the connotation that I’m unsound exasperating.
I don’t think this is just a problem for me; it’s a problem for the community in general. I know a number of very high quality thinkers in real life who are uninterested in participating on LW explicitly because they don’t want to engage with commenters who are highly confident that those thinkers’ positions are incorrect. There’s another selection effect here: such people aren’t salient because they’re invisible to the online community.
I know that there’s a selection effect (with respect to the more frustrating interactions standing out).
I agree that those frustrating interactions both happen and are frustrating, and that it leads to a general acidification of the discussion as people who don’t want to deal with it leave. Reversing that process in a sustainable way is probably the most valuable way to improve LW in the medium term.
There’s also the whole LessWrong-is-dying thing that might be contributing to the vibe you’re getting. I’ve been reading the forum for years, and it hasn’t felt very healthy for a while now. A lot of the impressive people from earlier have moved on, we don’t seem to be getting many new impressive people coming in, and hanging out a lot on the forum turns out not to make you that much more impressive. What’s left is turning increasingly into a weird sort of cargo cult of a forum for impressive people.
Actually, I think that LessWrong used to be worse when the “impressive people” were posting about cryonics, FAI, many-world interpretation of quantum mechanics, and so on.
It has seemed to me that a lot of the commenters who come with their own solid competency are also less likely to get unquestioningly swept away following EY’s particular hobbyhorses.
I needed to develop a sort of immunity against topics like acausal trade: I can’t fully specify how they are wrong, but they feel wrong, they are hard to translate into practical, testable statements, and they mess with my head in the wrong way.
The applicable word is metaphysics. Acausal trade is dabbling in metaphysics to “solve” a question in decision theory, which is itself mere philosophizing, and thus one has to wonder: what does Nature care for philosophies?
By the way, for the rest of your post I was going, “OH MY GOD I KNOW YOUR FEELS, MAN!” So it’s not as though nobody ever thinks these things. Those of us who do just tend to, in perfect evaporative cooling fashion, go get on with our lives outside this website, being relatively ordinary science nerds.
Sorry, avoiding metaphysics doesn’t work. You just end up either reinventing it (badly) or using a bad fifth-hand version of some old philosopher’s metaphysics. Incidentally, Eliezer also tried avoiding metaphysics and wound up doing the former.
I don’t like Eliezer’s apparent mathematical/computational Platonism myself, but most working scientists manage to avoid metaphysical buggery by simply dealing only with those things with which they can actually causally interact. I recall an Eliezer post on “Explain/Worship/Ignore”, and would add myself that while “Explain” eventually bottoms out in the limits of our current knowledge, the correct response is to hit “Ignore” at that stage, not to drop to one’s knees in Worship of a Sacred Mystery that is in fact just a limit to current evidence.
EDIT: This is also one of the reasons I enjoy being in this community: even when I disagree with someone’s view (eg: Eliezer’s), people here (including him) are often more productive and fun to talk to than someone who hits the limits of their scientific knowledge and just throws their hands up to the tune of “METAPHYSICS, SON!”, and then joins the bloody Catholic Church, as if that solved anything.
I don’t like Eliezer’s apparent mathematical/computational Platonism myself, but most working scientists manage to avoid metaphysical buggery by simply dealing only with those things with which they can actually causally interact.
That works up until the point where you actually have to think about what it means to “causally interact” with something. Also questions like “does something that falls into a black hole cease to exist since it’s no longer possible to interact with it”?
Also questions like “does something that falls into a black hole cease to exist since it’s no longer possible to interact with it”?
But there are trivially easy answers to questions like that. Basically you have to ask “ceases to exist for whom?”; it obviously ceases to exist for you. You just have to taboo words like “really” here, as in “does it really cease to exist,” because they are meaningless: they don’t lead to predictions. What people often consider “really” reality is the perception of a perfect, god-like, omniscient observer, but there is no such thing.
Essentially there are just two extremes to avoid, the po-mo “nothing is real, everything is mere perception” and the traditional, classical “but how are things really, really, REALLY?”, and the middle way here is “reality is the sum of what could be perceived in principle.” A perception is right or wrong based on how much it meshes with all the other things that can in principle be perceived. Everything that cannot even be perceived in theory is not part of reality. There is no “how things really are”; the closest we have to that is the sum of all potential, possible perceivables about a thing.
I picked up this approach from Eric S. Raymond, I think he worked it out decades before Eliezer did, possibly both working from Peirce.
I don’t know what real-for-me means here. Everything that in principle, in theory, could be observed, is real. Most of those things you didn’t actually observe; that does not make them any less real.
I meant the “for whom?” not in the sense of me, you, or the barkeeper down the street. I meant it in the sense of normal beings who know only things that are in principle knowable, vs. some godlike being who can know how things really “are” regardless of whether they are knowable or not.
Everything that in principle, in theory, could be observed, is real.
Well, that’s where it starts to break down, because what you can, in theory, observe is different from what I can, in theory, observe.
This is because, as far as anyone can tell, observations are limited by the speed of light. Alpha Centauri is about 4.4 light-years away, so I cannot, even in principle, observe the 2015 Alpha Centauri until at least 2019 (if I observe it now, I am seeing light that left it around 2011). If Alpha Centauri had suddenly exploded in 2013, I would have no way of observing that until late 2017, even in principle.
So if the barkeeper, instead of being down the street, is rather living on a planet orbiting Alpha Centauri, then the set of what he can observe in principle is not the same as the set of what I can observe in principle.
Physicists are not very precise about it; may I suggest looking into “potential outcomes,” the language some statisticians use to talk about counterfactuals:
Potential outcomes let you think about a model that contains a random variable for what happens to Fred if we give Fred aspirin, and a random variable for what happens to Fred if we give Fred placebo. Even though in reality we only gave Fred aspirin. This is “counterfactual definiteness” in statistics.
This paper uses potential outcomes to talk about outcomes of physics experiments (so there is an exact isomorphism between counterfactuals in physics and potential outcomes):
Sounds like this is perhaps related to the counterfactual-consistency statement? In its simple form, that the counterfactual or potential outcome under policy “a” equals the factual observed outcome when you in fact undertake policy “a”, or formally, Y^a = Y when A = a.
No, not quite. Counterfactual consistency is what allows you to link observed and hypothetical data (so it is also extremely important). Counterfactual definiteness is even more basic than that. It basically sets the size of your ontology by allowing you to talk about Y(a) and Y(a’) together, even if we only observe Y under one value of A.
edit: Stephen, I think I realized who you are; please accept my apologies if I seemed to be talking down to you re: potential outcomes, that was not my intention. My prior is that people do not know what potential outcomes are.
edit 2: Good talks by Richard Gill and Jamie Robins at JSM on this:
I just need to translate that into street lingo for him.
“There is shit we know, shit we could know, and shit we could not know no matter how good tech we had; we could not even know the effects it has on other stuff. So why should we say this latter stuff exists? Or why should we say it does not exist? We cannot prove either.”
My serious point is that one cannot avoid metaphysics, and that way too many people start out from “all this metaphysics stuff is BS, I’ll just use common sense” and end up with their own (bad) counter-intuitive metaphysical theory that they insist is “not metaphysics.”
You could charitably understand everything that such people (who assert that metaphysics is BS) say with a silent “up to empirical equivalence”. Doesn’t the problem disappear then?
You asked about emotional stuff so here is my perspective. I have extremely weird feelings about this whole forum that may affect my writing style. My view is constantly popping back and forth between different views, like in the rabbit-duck gestalt image. On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental efforts to stay “sane” here. Maybe I should refrain from commenting. It’s a pity because I’m generally very interested in the topics discussed here, but the tone and the underlying ideology is pushing me away. On the other hand I feel an urge to check out the posts despite this effect. I’m not sure what aspect of certain forums have this psychological effect on my thinking, but I’ve felt it on various reddit communities as well.
Seconded, actually, and it’s particular to LessWrong. I know I often joke that posting here gets treated as submitting academic material and skewered accordingly, but that is very much what it feels like from the inside. It feels like confronting a hostile crowd of, as Jonah put it, radical agnostics, every single time one posts, and they’re waiting for you to say something so they can jump down your throat about it.
Oh, and then you run into the issue of having radically different priors and beliefs, so that you find yourself on a “rationality” site where someone is suddenly using the term “global warming believer” as though the IPCC never issued multiple reports full of statistical evidence. I mean, sure, I can put some probability on, “It’s all a conspiracy and the official scientists are lying”, but for me that’s in the “nonsense zone”—I actually take offense to being asked to justify my belief in mainstream science.
As much as “good Bayesians” are never supposed to agree to disagree, I would very much like if people would be up-front about their priors and beliefs, so that we can both decide whether it’s worth the energy spent on long threads of trying to convince people of things.
Rather bad statistical evidence I might add. Seriously, your argument amounts to an appeal to authority. Whatever happened to nullius in verba?
Some of them are, a lot of them were even caught when the climategate emails went public. Most of them, however, are some combination of ideologues and people who couldn’t handle the harder sciences and are now memorizing the teacher’s password, in other words a prospiracy. Add in what happens to climate journals that dare publish anything insufficiently alarmist and one gets the idea about the current state of climate science.
Appeal to Authority? Not in the normal sense that the IPCC exercises violent force, and I therefore designate them factually correct. No, it’s an Appeal to Expertise Outside My Own Domain. It’s me expecting that the same academic and scientific processes and methods that produced my expertise in my fields produced domain-experts in other fields with their own expertise, and that I can therefore trust in their findings about as thoroughly as I trust in my own.
That’s not the normal sense of appeal to authority, that would be appeal to force.
And how do you know that they’re actual experts? Because they (metaphorically) wear lab coats? That’s what appeal to authority is. While it’s not necessarily a fallacy, it’s notable that science started making progress as soon as people disavowed using it.
Do you believe that the mass of the muon as listed by the Particle Data Group is at least approximately correct? If so, why?
I haven’t tracked down the specific evidence—but muons are comparatively easy: They live long enough to leave tracks in particle detectors with known magnetic fields. That gives you the charge-to-mass ratio. Given that charge looks quantized (Milliken oil drop experiment and umpteen repetitions), and there are other pieces of evidence from the particle tracks of muon decay (and the electrons from that decay again leave tracks, and the angles are visible even if the neutrinos aren’t) - I’d be surprised if the muon mass wasn’t pretty solid.
Assuming that both particle physicists and climatologists are doing things properly, that would only mean that the muon mass has much smaller error bars than the global warming (which it does), not that the former is more likely to be correct within its error bars.
Then again, it’s possible that climatologists are less likely to be doing things properly.
If you ask a physicist or an evolutionist why their beliefs are correct they will generally give you an answer (or at least start talking about the general principal). If you ask that question about climate science you’ll generally get either a direct appeal to authority or an indirect one: it’s all in this official report which I haven’t read but it’s official so it must be correct.
Heck climate scientists aren’t even that sparing about basic facts. They’ll mention that CO2 is a greenhouse gas, but avoid any more technical questions. For example, I only recently found out that (in the absence of other factors or any feedback) temperature is a logarithmic function of CO2 concentration.
So this seems like you’ve never cracked open any climate/atmospheric science textbook? Because that is pretty basic info. It seems like you’re determined to be skeptical despite not really spending much time learning about the state of the science. Also it sounds like you are equivocating between “climate scientist” and “person on the internet who believes in global warming.”
My background is particle physics, if someone asked me about the mass of a muon, I’d have to make about a hundred appeals to authority to give them any relevant information, and I suspect climate scientists are in the same boat when talking to people who don’t understand some of the basics. I’ve personally engaged with special relativity crackpots who ask you to justify everything, and keep saying this or that basic fact from the field is an appeal to authority. There is no convincing a determined skeptic, so it’s best not to engage.
If you are near a university campus, wait until there is a technical talk on climate modelling and go sit and listen (don’t ask questions, just listen). You’ll probably be surprised at how vociferous the debate is- climate modelers are serious scientists working hard on perfecting their models.
Thanks so much for sharing. I’m astonished by how much more fruitful my relationships have became since I’ve started asking.
I think that a lot of what you’re seeing is a cultural clash: different communities have different blindspots and norms for communication, and a lot of times the combination of (i) blindspots of the communities that one is familiar with and (ii) respects in which a new community actually is unsound can give one the impression “these people are beyond the pale!” when the actual situation is that they’re no less rational than members of one’s own communities.
I had a very similar experience to your own coming from academia, and wrote a post titled The Importance of Self-Doubt in which I raised the concern that Less Wrong was functioning as a cult. But since then I’ve realized that a lot of the apparently weird beliefs on LWers are in fact also believed by very credible people: for example, Bill Gates recently expressed serious concern about AI risk.
If you’re new to the community, you’re probably unfamiliar with my own credentials which should reassure you somewhat:
I did a PhD in pure math under the direction of Nathan Dunfield, who coauthored papers with Bill Thurston, who formulated the geometrization conjecture which Perelman proved and in doing so won one of the Clay Millennium Problems.
I’ve been deeply involved with math education for highly gifted children for many years. I worked with the person who won the American Math Society prize for best undergraduate research when he was 12.
I worked at GiveWell, which partners with with Good Ventures, Dustin Moskovitz’s foundation.
I’ve done fullstack web development, making an asynchronous clone of StackOverflow (link).
I’ve done machine learning, rediscovering logistic regression, collaborative filtering, hierarchical modeling, the use of principal component analysis to deal with multicollinearity, and cross validation. (I found the expositions so poor that it was faster for me to work things out on my own than to learn from them, though I eventually learned the official versions).You can read some details of things that I found here. I did a project implementing Bayesian adjustment of Yelp restaurant star ratings using their public dataset here
So I imagine that I'm credible by your standards. There are other people involved in the community who you might find even more credible. For example: (a) Paul Christiano, who was an International Math Olympiad medalist, wrote a 50-page paper on quantum computational complexity with Scott Aaronson as an undergraduate at MIT, and is a theoretical CS grad student at Berkeley; (b) Jacob Steinhardt, a Hertz graduate fellow who does machine learning research under Percy Liang at Stanford.
So you’re not actually in some sort of twilight zone. I share some of your concerns with the community, but the groupthink here is no stronger than the groupthink present in academia. I’d be happy to share my impressions of the relative soundness of the various LW community practices and beliefs.
Of course, Christiano tends to issue disclaimers with his MIRI-branded AGI safety work, explicitly stating that he does not believe in alarmist UFAI scenarios. Which is fine, in itself, but it does show how people expect someone associated with these communities to sound.
And Jacob Steinhardt hasn’t exactly endorsed any “Twilight Zone” community norms or propaganda views. Errr, is there a term for “things everyone in a group thinks everyone else believes, whether or not they actually do”?
I’m not claiming otherwise: I’m merely saying that Paul and Jacob don’t dismiss LWers out of hand as obviously crazy, and have in fact found the community to be worthwhile enough to have participated substantially.
I think in this case we have to taboo the term “LWers” ;-). This community has many pieces in it, and two large parts of the original core are “techno-libertarian Overcoming Bias readers with many very non-mainstream beliefs that they claim are much more rational than anyone else’s beliefs” and “the SL4 mailing list wearing suits and trying to act professional enough that they might actually accomplish their Shock Level Four dreams.”
On the other hand, in the process of the site’s growth, it has eventually come to encompass those two demographics plus, to some limited extent, almost everyone who’s willing to assent that science, statistical reasoning, and the neuro/cognitive sciences actually really work and should be taken seriously. With special emphasis on statistical reasoning and cognitive sciences.
So the core demographic consists of Very Unusual People, but the periphery demographics, who now make up most of the community, consist of only Mildly Unusual People.
Yes, this seems like a fair assessment of the situation. Thanks for disentangling the issues. I'll be more precise in the future.
Those are indeed impressive things you did. I agree very much with your post from 2010. But the fact that many people have this initial impression shows that something is wrong. What makes it look like a “twilight zone”? Why don’t I feel the same symptoms for example on Scott Alexander’s Slate Star Codex blog?
Another thing I could pinpoint is that I don’t want to identify as a “rationalist”, I don’t want to be any -ist. It seems like a tactic to make people identify with a group and swallow “the whole package”. (I also don’t think people should identify as atheist either.)
I’m sympathetic to everything you say.
In my experience there's an issue of Less Wrongers being unusually emotionally damaged (e.g. relative to academics), and this gives rise to a lot of problems in the community. But I don't think that the emotional damage primarily comes from the weird stuff that you see on Less Wrong. What one sees is that they have borne the brunt of the phenomenon that I described here disproportionately relative to other smart people, often because they're unusually creative and have been marginalized by conformist norms.
Quite frankly, I find the norms in academia very creepy: I've seen a lot of people develop serious mental health problems in connection with their experiences in academia. It's hard to see it from the inside: I was disturbed by what I saw, but I only recognized in retrospect that math academia actually functions as a cult, a view shared, by implicit consensus, by some of the best mathematicians in the world (I can give references if you'd like).
I’m sure you’re aware that the word “cult” is a strong claim that requires a lot of evidence, but I’d also issue a friendly warning that to me at least it immediately set off my “crank” alarm bells. I’ve seen too many Usenet posters who are sure they have a P=/!=NP proof, or a proof that set theory is false, or etc. who ultimately claim that because “the mathematical elite” are a cult that no one will listen to them. A cult generally engages in active suppression, often defamation, and not simply exclusion. Do you have evidence of legitimate mathematical results or research being hidden/withdrawn from journals or publicly derided, or is it more of an old boy’s club that’s hard for outsiders to participate in and that plays petty politics to the damage of the science?
Grothendieck’s problems look to be political and interpersonal. Perelman’s also. I think it’s one thing to claim that mathematical institutions are no more rational than any other politicized body, and quite another to claim that it’s a cult. Or maybe most social behavior is too cult-like. If so; perhaps don’t single out mathematics.
I question the direction of causation. Historically many great mathematicians have been mentally and socially atypical and ended up not making much sense with their later writings. Either mathematics has always had an institutional problem or mathematicians have always had an incidence of mental difficulties (or a combination of both; but I would expect one to dominate).
Especially in Thurston’s On Proof and Progress in Mathematics I can appreciate the problem of trying to grok specialized areas of mathematics. The terminology and symbology is opaque to the uninitiated. It reminds me of section 1 of the Metamath Book which expresses similar unhappiness with the state of knowledge between specialist fields of mathematics and the general difficulty of learning mathematics. I had hoped that Metamath would become more popular and tie various subfields together through unifying theories and definitions, but as far as I can tell it languishes as a hobbyist project for a few dedicated mathematicians.
Thanks, yeah, people have been telling me that I need to be more careful in how I frame things. :-)
The latter, but note that that’s not necessarily less damaging than active suppression would be.
Yes, this is what I believe. The math community is just unusually salient to me, but I should phrase things more carefully.
Most of the people who I have in mind did have preexisting difficulties. I meant something like “relative to a counterfactual where academia was serving its intended function.” People of very high intellectual curiosity sometimes approach academia believing that it will be an oasis and find this not to be at all the case, and that the structures in place are in fact hostile to them.
This is not what the government should be supporting with taxpayer dollars.
What are your own interests?
I suppose there’s one scant anecdote for estimating this; cryptography research seemed to lag a decade or two behind actively suppressed/hidden government research. Granted, there was also less public interest in cryptography until the 80s or 90s, but it seems that suppression can only delay publication, not prevent it.
The real risk of suppression and exclusion both seem to be in permanently discouraging mathematicians who would otherwise make great breakthroughs, since affecting the timing of publication/discovery doesn’t seem as damaging.
I think I would be surprised if Basic Income was a less effective strategy than targeted government research funding.
Everything from logic and axiomatic foundations of mathematics to practical use of advanced theorems for computer science. What attracted me to Metamath was the idea that if I encountered a paper that was totally unintelligible to me (say Perelman's proof of the Poincaré conjecture or Wiles' proof of Fermat's Last Theorem) I could backtrack through sound definitions to concepts I already knew, and then build my understanding up from those definitions. Even just having a cross-reference of related definitions between various fields would be helpful. I take it that model theory is the place to look for such a cross-reference, so that is probably the next thing I plan to study.
Practically, I realize that I don’t have enough time or patience or mental ability to slog through formal definitions all day, and so it would be nice to have something even better. A universal mathematical educator, so to speak. Although I worry that without a strong formal understanding I will miss important results/insights. So my other interest is building the kind of agent that can identify which formal insights are useful or important, which sort of naturally leads to an interest in AI and decision theory.
I would like to see some of those references (simply because I have no relation to academia, and don't want things I read somewhere to gestate into unfounded intuitions about a subject).
I’ve only been in CS academia, and wouldn’t call that a cult. I would call it, like most of the rest of academia, a deeply dysfunctional industry in which to work, but that’s the fault of the academic career and funding structure. CS is even relatively healthy by comparison to much of the rest.
How much of our impression of mathematics as a creepy, mental-health-harming cult comes from pure stereotyping?
Jonah happens to be a math PhD. How can you engage in pure stereotyping of mathematicians while getting your PhD?
I was more positing that it’s a self-reinforcing, self-creating effect: people treat Mathematics in a cultish way because they think they’re supposed to.
I don’t believe there’s any such thing, on the general grounds of “no fake without a reality to be a fake of.”
Who do you mean when you say “people”?
For what it's worth, I have observed a certain reverence in the way great mathematicians are treated by their less accomplished colleagues that can often border on the creepy. This seems most pronounced in math; it exists in other disciplines, but with lesser intensity.
But I agree, “dysfunctional” seems to be a more apt label than “cult.” May I also add “fashion-prone”?
Er, what? Who do you mean by “we”?
The link says of Turing:
This is a staggeringly wrong account of how he died.
Hence my calling it “pure stereotyping”!
I don’t have direct exposure to CS academia, which, as you comment, is known to be healthier :-). I was speaking in broad brushstrokes , I’ll qualify my claims and impressions more carefully later.
I don’t really understand what you mean about math academia. Those references would be appreciated.
The top 3 answers to the MathOverflow question Which mathematicians have influenced you the most? are Alexander Grothendieck, Mikhail Gromov, and Bill Thurston. Each of these has expressed serious concerns about the community.
Grothendieck was actually effectively excommunicated by the mathematical community and then was pathologized as having gone crazy. See pages 37-40 of David Ruelle’s book A Mathematician’s Brain.
Gromov expresses strong sympathy for Grigory Perelman having left the mathematical community starting on page 110 of Perfect Rigor. (You can search for “Gromov” in the pdf to see all of his remarks on the subject.)
Thurston made very apt criticisms of the mathematical community in his essay On Proof and Progress In Mathematics. See especially the beginning of Section 3: “How is mathematical understanding communicated?” Terry Tao endorses Thurston’s essay in his obituary of Thurston. But the community has essentially ignored Thurston’s remarks: one almost never hears people talk about the points that Thurston raises.
I don’t know about Grothendieck, but the two other sources appear to have softer criticism of the mathematical community than “actually functioning as a cult”.
The links you give are extremely interesting, but, unless I am missing something, it seems that they fall short of justifying your earlier statement that math academia functions as a cult. I wonder if you would be willing to elaborate further on that?
I’ll be writing more about this later.
The scariest thing to me is that the most mathematically talented students are often turned off by what they see in math classes, even at the undergraduate and graduate levels. Math serves as a backbone for the sciences, so this may be badly undercutting scientific innovation at a societal level.
I honestly think that it would be an improvement on the status quo to stop teaching math classes entirely. Thurston characterized his early math education as follows:
I hated much of what was taught as mathematics in my early schooling, and I often received poor grades. I now view many of these early lessons as anti-math: they actively tried to discourage independent thought. One was supposed to follow an established pattern with mechanical precision, put answers inside boxes, and “show your work,” that is, reject mental insights and alternative approaches.
I think that this characterizes math classes even at the graduate level, only at a higher level of abstraction. The classes essentially never offer students exposure to free-form mathematical exploration, which is what it takes to make major scientific discoveries with significant quantitative components.
I distinctly remember having points taken off of a physics midterm because I didn’t show my work. I think I dropped the exam in the waste basket on the way out of the auditorium.
I’ve always assumed that the problem is three-fold; generating a formal proof is NP-hard, getting the right answer via shortcuts can include cheating, and the faculty’s time is limited. Professors/graders do not have the capacity to rigorously demonstrate to themselves that the steps a student has written down actually pinpoint the unique answer. Without access to the student’s mind graders are unable to determine if students cheat or not; being able to memorize and/or reproduce the exact steps of a calculation significantly decrease the likelihood of cheating. Even if graders could do one or both of the previous for a single student, they are not 30x or 100x as smart as their students, making it impractical to repeat the process for every student.
That said, I had some very good mathematics teachers in higher level courses who could force students to think, and one in particular who could encourage/demand novelty from students simply by asking them to solve problems that they hadn’t yet learned to solve. I didn’t realize the power of the latter approach until later (and at the time everyone complained about exams with a median score well under 50%), but his classes were always my favorite.
Thank you for all these interesting references. I enjoyed reading all of them, and rereading in Thurston’s case.
Do people pathologize Grothendieck as having gone crazy? I mostly think people think of him as being a little bit strange. The story I heard was that because of philosophical disagreements with military funding and personal conflicts with other mathematicians he left the community and was more or less refusing to speak to anyone about mathematics, and people were sad about this and wished he would come back.
His contribution to math is too great for people to have explicitly adopted a stance too unfavorable to him, and many mathematicians did in fact miss him a lot. But as Perelman said:
“Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest.” He has also said: “It is not people who break ethical standards who are regarded as aliens. It is people like me who are isolated.”
If pressed, many mathematicians downplay the role of those who behaved unethically toward him, and the failure of the community to give him a job, in favor of a narrative of “poor guy, it's so sad that he developed mental health problems.”
What failure? He stepped down from the Steklov Institute and has refused every job offer and prize given to him.
From the details I’m aware of “gone crazy” is not a bad description of what happened.
Nobody forces you to do so. Plenty of people in this community don't self-identify that way.
I’ve always thought that calling yourself a “rationalist” or “aspiring rationalist” is rather useless. You’re either winning or not winning. Calling yourself by some funny term can give you the nice feeling of belonging to a community, but it doesn’t actually make you win more, in itself.
That sounds like you engage in binary thinking and don't sufficiently value shades of grey of uncertainty. You feel the need to judge arguments as true or not true, and don't have a mental category for “might be true, or might not be true.”
Jonah makes strong claims for which he doesn’t provide evidence. He’s clear about the fact that he hasn’t provided the necessary evidence.
Given that, you pattern-match to “crackpot” instead of putting Jonah in the mental category where you don't know whether what he says is right or wrong. If you start to put a lot of claims into the “I don't know” pile, you don't constantly pop between belief and non-belief. Popping back and forth means that the size of your updates when presented with new evidence is too large.
Being able to say “I don’t know” is part of genuine skepticism.
I’m not talking about back and forth between true and false, but between two explanations. You can have a multimodal probability distribution and two distant modes are about equally probable, and when you update, sometimes one is larger and sometimes the other. Of course one doesn’t need to choose a point estimate (maximum a posteriori), the distribution itself should ideally be believed in its entirety. But just as you can’t see the rabbit-duck as simultaneously 50% rabbit and 50% duck, one sometimes switches between different explanations, similarly to an MCMC sampling procedure.
I don’t want to argue this too much because it’s largely a preference of style and culture. I think the discussions are very repetitive and it’s an illusion that there is much to be learned by spending so much time thinking meta.
Anyway, I evaporate from the site for now.
I would be very interested in hearing elaboration on this topic, either publicly or privately.
I prefer public discussions. First, I'm a computer science student who took courses in machine learning and AI and wrote theses in these areas (nothing exceptional), and I enjoy books like Thinking, Fast and Slow, The Black Swan, Pinker, Dawkins, Dennett, Ramachandran, etc. So the topics discussed here are interesting to me too. But the atmosphere seems quite closed and inward-turning.
I feel similarities to reddit's Red Pill community. Previously “ignorant” people feel the community has opened a new world to them: they lived in darkness before, but now they have found the “Way” (“Bayescraft”), and all this stuff becomes an identity for them.
Sorry if it’s offensive, but I feel as if many people had no success in the “real world” matters and invented a fiction where they are the heroes by having joined some great organization much higher above the general public, who are just irrational automata still living in the dark.
I dislike the heavy use of insider terminology, which makes communication with “outsiders” about these ideas quite hard: you get used to referring to things by the in-group terms, and you become somewhat isolated from your real-life friends because you feel “they won't understand, they'd have to read so much.” In fact, many of the concepts are not all that new and could be phrased in a way that the “uninitiated” can also understand.
There are too many cross-references in posts, and they keep you busy with the site longer than necessary. It seems that people try to prove they know a concept by using the jargon and linking to it. Instead, I'd prefer authors who actively try to minimize the need for links and jargon.
I also find the posts quite redundant. They tend to reiterate the same patterns in very long prose, with people's stories intertwined with the ideas, instead of striving for clarity and conciseness. Much of it feels like self-help for people with derailed lives who are trying to engineer their way (back) to success. I may be wrong, but I get a depressed vibe from reading the site for too long. It may also be because there is no lighthearted humor or in-jokes or “fun” or self-irony at all. Maybe the members are just like that in general (perhaps due to mental differences, such as being on the autism spectrum; I'm not a psychiatrist).
I can see that people here are really smart and the comments are often very reasonable. And it makes me wonder why they'd hold a single person such as Yudkowsky in such high esteem compared to established book authors, academics, or industry people in these areas. I know there has been much discussion about cultishness, and I think it goes a lot deeper than surface issues. LessWrong seems quite isolated and distrustful of the mainstream. Many people seem to have read this material first from Yudkowsky, who often does not reference earlier works that state essentially the same things, so people get the impression that all or most of the ideas in “The Sequences” come from him. I was quite disappointed several times when I found the same ideas in mainstream books. The Sequences also often depict the whole outside world as dumber than it is (straw-man tactics, etc.).
Another thing is that discussion is often too meta (or meta-meta). There is discussion of Bayes' theorem and math principles, but no actual detailed, worked-out examples. Very little actual programming, for example. I'd expect people to create GitHub projects or IPython notebooks showing examples of what they are talking about. Much of the meta-meta-discussion is very opinion-based because there is no immediate feedback about whether someone is right or wrong; it's hard to test such hypotheses. For example, in this post I would have expected an example dataset and a demonstration of how PCA can uncover something surprising. Otherwise it's just floating out there, even though it matches nicely with the pattern that “some math concept gave me insight that refined my rationality.” I'm not sure; maybe these “rationality improvements” are sometimes illusions.
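To illustrate the kind of minimal, worked example I mean (a sketch of my own, assuming numpy and scikit-learn), here is PCA making hidden redundancy among predictors explicit, which is the multicollinearity use case mentioned above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy dataset: two nearly collinear predictors driven by one latent
# factor, plus one independent predictor.
n = 500
latent = rng.normal(size=n)
X = np.column_stack([
    latent + 0.05 * rng.normal(size=n),   # predictor 1
    latent + 0.05 * rng.normal(size=n),   # predictor 2, almost a copy of 1
    rng.normal(size=n),                   # unrelated predictor
])

pca = PCA().fit(X)
# The first component absorbs the shared latent direction (~2/3 of the
# total variance), exposing the redundancy that would destabilize a
# regression on the raw predictors.
print(pca.explained_variance_ratio_)
```

Even a toy demo like this gives immediate feedback: either the component structure shows up in the numbers or it doesn't.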
I also don’t get why the rationality stuff is intermixed with friendly AI and cryonics and transhumanism. I just don’t see why these belong that much together. I find them too speculative and detached from the “real world” to be the central ideas. I realize they are important, but their prevalence could also be explained as “escapism” and it promotes the discussion of untestable meta things that I mentioned above, never having to face reality. There is much talk about what evidence is but not much talk that actually presents evidence.
I've needed to develop a sort of immunity against topics like acausal trade: I can't fully specify how they are wrong, but they feel wrong, are hard to translate into practical, testable statements, and mess with my head in the wrong way.
And of course there is also that secrecy around and hiding of “certain things”.
That’s it. This place may just not be for me, which is fine. People can have their communities in the way they want. You just asked for elaboration.
Thanks for the detailed response! I’ll respond to a handful of points:
I certainly agree that there are people here who match that description, but it’s also worth pointing out that there are actual experts too.
One of the things I find most charming about LW, compared to places like RationalWiki, is how much emphasis there is on self-improvement and on one's own mistakes, rather than on the mistakes other people make because they're dumb.
I’m not sure this is avoidable, and in full irony I’ll link to the wiki page that explains why.
In general, there are lots of concepts that seem useful, but the only way we have to refer to concepts is either to refer to a label or to explain the concept. A number of people read through the sequences and say “but the conclusions are just common sense!”, to which the response is, “yes, but how easy is it to communicate common sense?” It’s one thing to be able to recognize that there’s some vague problem, and another thing to be able to say “the problem here is inferential distance; knowledge takes many steps to explain, and attempts to explain it in fewer steps simply won’t work, and the justification for this potentially surprising claim is in Appendix A.” It is one thing to be able to recognize a concept as worthwhile; it is another thing to be able to recreate that concept when a need arises.
Now, I agree with you that having different labels to refer to the same concept, or conceptual boundaries or definitions that are drawn slightly differently, is a giant pain. When possible, I try to bring the wider community’s terminology to LW, but this requires being in both communities, which limits how much any individual person can do.
Part of that is just seeding effects—if you start a rationality site with a bunch of people interested in transhumanism, the site will remain disproportionately linked to transhumanism because people who aren’t transhumanists will be more likely to leave and people who are transhumanists will be more likely to find and join the site.
Part of it is that those are the cluster of ideas that seem weird but ‘hold up’ under investigation—most of the reasons to believe that the economy of fifty years from now will look like the economy of today are just confused, and if a community has good tools for dissolving confusions you should expect them to converge on the un-confused answer.
A final part seems to be availability; people who are convinced by the case for cryonics tend to be louder than the people who are unconvinced. The annual surveys show the perception of LW one gets from just reading posts (or posts and comments) is skewed from the perception of LW one gets from the survey results.
I agree that LW is much better than RationalWiki, but I still think that the norms for discussion are much too far in the direction of focus on how other commenters are wrong as opposed to how one might oneself be wrong.
I know that there’s a selection effect (with respect to the more frustrating interactions standing out). But people not infrequently mistakenly believe that I’m wrong about things that I know much more about than they do, with very high confidence, and in such instances I find the connotations that I’m unsound to be exasperating.
I don’t think that this is just a problem for me rather than a problem for the community in general: I know a number of very high quality thinkers in real life who are uninterested in participating on LW explicitly because they don’t want to engage with commenters who are highly confident that their own positions are incorrect. There’s another selection effect here: such people aren’t salient because they’re invisible to the online community.
I agree that those frustrating interactions both happen and are frustrating, and that it leads to a general acidification of the discussion as people who don’t want to deal with it leave. Reversing that process in a sustainable way is probably the most valuable way to improve LW in the medium term.
There’s also the whole Lesswrong-is-dying thing that might be contribute to the vibe you’re getting. I’ve been reading the forum for years and it hasn’t felt very healthy for a while now. A lot of the impressive people from earlier have moved on, we don’t seem to be getting that many new impressive people coming in and hanging out a lot on the forum turns out not to make you that much more impressive. What’s left is turning increasingly into a weird sort of cargo cult of a forum for impressive people.
Actually, I think that LessWrong used to be worse when the “impressive people” were posting about cryonics, FAI, the many-worlds interpretation of quantum mechanics, and so on.
It has seemed to me that a lot of the commenters who come with their own solid competency are also less likely to get unquestioningly swept away following EY’s particular hobbyhorses.
The applicable word is metaphysics. Acausal trade is dabbling in metaphysics to “solve” a question in decision theory, which is itself mere philosophizing, and thus one has to wonder: what does Nature care for philosophies?
By the way, for the rest of your post I was going, “OH MY GOD I KNOW YOUR FEELS, MAN!” So it’s not as though nobody ever thinks these things. Those of us who do just tend to, in perfect evaporative cooling fashion, go get on with our lives outside this website, being relatively ordinary science nerds.
Sorry, avoiding metaphysics doesn't work. You just end up either reinventing it (badly) or using a bad fifth-hand version of some old philosopher's metaphysics. Incidentally, Eliezer also tried avoiding metaphysics and wound up doing the former.
I don’t like Eliezer’s apparent mathematical/computational Platonism myself, but most working scientists manage to avoid metaphysical buggery by simply dealing with only those things with which what they can actually causally interact. I recall an Eliezer post on “Explain/Worship/Ignore”, and would add myself that while “Explain” eventually bottoms out in the limits of our current knowledge, the correct response is to hit “Ignore” at that stage, not to drop to one’s knees in Worship of a Sacred Mystery that is in fact just a limit to current evidence.
EDIT: This is also one of the reasons I enjoy being in this community: even when I disagree with someone’s view (eg: Eliezer’s), people here (including him) are often more productive and fun to talk to than someone who hits the limits of their scientific knowledge and just throws their hands up to the tune of “METAPHYSICS, SON!”, and then joins the bloody Catholic Church, as if that solved anything.
That works up until the point where you actually have to think about what it means to “causally interact” with something. Also questions like “does something that falls into a black hole cease to exist, since it's no longer possible to interact with it?”
But there are trivially easy answers to questions like that. Basically you have to ask “ceases to exist for whom?”, i.e., it obviously ceases to exist for you. You just have to taboo words like “really” here, as in “does it really cease to exist,” since such questions are meaningless: they don't lead to predictions. What people often consider “real” reality is the perception of a perfect, god-like, omniscient observer, but there is no such thing.
Essentially there are just two extremes to avoid: the po-mo “nothing is real, everything is mere perception” and the traditional, classical “but how are things really, really, REALLY?” The middle way is “reality is the sum of what could be perceived in principle.” A perception is right or wrong based on how well it meshes with all the other things that could in principle be perceived. Whatever cannot even be perceived in principle is not part of reality. There is no way things “really” are; the closest we have is the sum of all potential, possible perceivables about a thing.
I picked up this approach from Eric S. Raymond, I think he worked it out decades before Eliezer did, possibly both working from Peirce.
This is basically anti-metaphysics.
Does this imply that only things that exist in my past light cone are real for me at any given moment?
I don’t know what real-for-me means here. Everything that in principle, in theory, could be observed, is real. Most of those you didn’t. This does not make them any less real.
I meant the “for whom?” not in the sense of me, you, or the barkeeper down the street. I meant it in the sense of normal beings who know only things that are in principle knowable, vs. some godlike being who can know how things really “are” regardless of whether they are knowable or not.
Well, that’s where it starts to break down; because what you can, in theory, observe is different from what I can, in theory, observe.
This is because, as far as anyone can tell, observations are limited by the speed of light, and Alpha Centauri is about 4.4 light-years away. I cannot, even in principle, observe the 2015 Alpha Centauri until at least 2019 (if I observe it now, I am seeing light that left it around 2011). If Alpha Centauri had suddenly exploded in 2013, I would have no way of observing that until at least 2017, even in principle.
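Spelled out, this is just light travel time:

\[
t_{\text{observed}} = t_{\text{event}} + \frac{d}{c} \approx 2013 + 4.4\,\mathrm{yr} \approx 2017.4 .
\]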
So if the barkeeper, instead of being down the street, is rather living on a planet orbiting Alpha Centauri, then the set of what he can observe in principle is not the same as the set of what I can observe in principle.
I’d like to congratulate you on developing your own “makes you sound insane to the man in the street” theory of metaphysics.
Man on the street needs to learn what counterfactual definiteness is.
Ilya, can you give me a definition of “counterfactual definiteness” please?
Physicists are not very precise about it. May I suggest looking into “potential outcomes”, the language some statisticians use to talk about counterfactuals:
https://en.wikipedia.org/wiki/Rubin_causal_model
https://en.wikipedia.org/wiki/Counterfactual_definiteness
Potential outcomes let you write down a model that contains one random variable for what happens to Fred if we give him aspirin, and another random variable for what happens to Fred if we give him placebo, even though in reality we only gave Fred aspirin. This is “counterfactual definiteness” in statistics.
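A minimal simulation of the idea, as a sketch of my own (assuming numpy; the aspirin/placebo naming just follows the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Posit BOTH potential outcomes for every unit, even though only one
# can ever be observed; that is the "definiteness".
y_aspirin = rng.normal(loc=1.0, size=8)   # Y(a): outcome under aspirin
y_placebo = rng.normal(loc=2.0, size=8)   # Y(a'): outcome under placebo
a = np.array([1, 0, 1, 0, 1, 0, 1, 0])    # treatment actually assigned

# Counterfactual consistency links the model to the data: Y = Y(A).
y_observed = np.where(a == 1, y_aspirin, y_placebo)

# The unit-level effect Y(a) - Y(a') is well-defined in the model but
# never observable for any single unit.
print("average causal effect (in the model):",
      (y_aspirin - y_placebo).mean())
print("observed treated-vs-control contrast:",
      y_observed[a == 1].mean() - y_observed[a == 0].mean())
```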
This paper uses potential outcomes to talk about outcomes of physics experiments (so there is an exact isomorphism between counterfactuals in physics and potential outcomes):
http://arxiv.org/pdf/1207.4913.pdf
Sounds like this is perhaps related to the counterfactual-consistency statement? In its simple form, that the counterfactual or potential outcome under policy “a” equals the factual observed outcome when you in fact undertake policy “a”, or formally, Y^a = Y when A = a.
Pearl has a nice (easy) discussion in the journal Epidemiology (http://www.ncbi.nlm.nih.gov/pubmed/20864888).
Is this what you are getting at, or am I missing the point?
No, not quite. Counterfactual consistency is what allows you to link observed and hypothetical data (so it is also extremely important). Counterfactual definiteness is even more basic than that. It basically sets the size of your ontology by allowing you to talk about Y(a) and Y(a’) together, even if we only observe Y under one value of A.
edit: Stephen, I think I realized who you are; please accept my apologies if I seemed to be talking down to you re: potential outcomes, that was not my intention. My prior is that most people do not know what potential outcomes are.
edit 2: Good talks by Richard Gill and Jamie Robins at JSM on this:
http://www.amstat.org/meetings/jsm/2015/onlineprogram/ActivityDetails.cfm?SessionID=211222
No offense taken. I am sorry I did not get to see Gill & Robins at JSM. Jamie also talks about some of these issues online back in 2013 at https://www.youtube.com/watch?v=rjcoJ0gC_po
Well, this whole thread started because minusdash and eli_sennesh objected to the concept of acausal trade for being too metaphysical.
I just need to translate that for him into street lingo.
“There is shit we know, shit we could know, and shit we could not know no matter how good our tech was; we could not even know the effects it has on other stuff. So why should we say this latter stuff exists? Or why should we say it does not exist? We cannot prove either.”
My serious point is that one cannot avoid metaphysics, and that way too many people start out from “all this metaphysics stuff is BS, I'll just use common sense” and end up with their own (bad) counter-intuitive metaphysical theory that they insist is “not metaphysics”.
You could charitably understand everything that such people (who assert that metaphysics is BS) say with a silent “up to empirical equivalence”. Doesn’t the problem disappear then?
No because you need a theory of metaphysics to explain what “empirical equivalence” means.
To be honest, I don’t see that at all.
So how would you define “empirical equivalence”?
It's insufficiently appreciated that physicalism is metaphysics too.