That is, I would like to see a subcommunity of LW devoted to researching mathematical and scientific problems independently of the current formal academic structure. Indeed, this already exists for decision theory; I would like to see it extended to other mathematical topics as well. I would even like to do this project.
I am convinced that there is a lot more fruit hanging a lot lower than people realize, in pretty much every field. Yes, even in string theory/quantum gravity/mathematical physics. The negative epistemic effects of existing social structures (aka Eld Science), as well as simple cognitive biases, really are that bad.
It may be helpful in this connection to remember Quirrell’s Law:
The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought.
Moreover, I have to admit that I’m just curious as hell about some of these topics, and about the level of progress that could be achieved via systematic, LW-inspired/trained effort.
So who’s interested in building a rationalist subcommunity for mathematical and scientific research? Zack Davis? Any of the decision theory people? Does anyone else feel as I do?
That is, I would like to see a subcommunity of LW devoted to researching mathematical and scientific problems independently of the current formal academic structure. Indeed, this already exists for decision theory; I would like to see it extended to other mathematical topics as well.
That would be really nice. There have been some attempts, but all previous attempts (that I’m aware of) have more or less failed.
I would even like to do this project.
I’m far less optimistic about this.
So who’s interested in building a rationalist subcommunity for mathematical and scientific research? Zack Davis? Any of the decision theory people? Does anyone else feel as I do?
I was thinking about this during the Falcon 9/Dragon launch, and the lowest-hanging fruit in PDEs probably involves systematizing the wealth of inequalities and quantitative results regarding solutions of PDEs. It probably wouldn’t be very flashy, though—there are a lot of extremely technical results that are only well-understood by a handful of people.
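To give one deliberately elementary illustration of the kind of quantitative result I mean (a standard textbook estimate, chosen only as an example): for the heat equation $u_t = \Delta u$, multiplying by $u$ and integrating by parts yields the basic energy estimate

$$\frac{d}{dt}\,\frac{1}{2}\,\lVert u(t)\rVert_{L^2}^2 \;=\; -\,\lVert \nabla u(t)\rVert_{L^2}^2 \;\le\; 0, \qquad \text{hence} \qquad \lVert u(t)\rVert_{L^2} \le \lVert u(0)\rVert_{L^2}.$$

The literature contains a huge number of far more technical estimates of this general shape, and those are what a systematic effort would have to organize.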
There was an attempt headed by Tao (of course) and others, called DispersiveWiki, but it fell to a spam attack a couple months ago. I’m not sure if it’s been rebooted.
there is a lot more fruit hanging a lot lower than people realize, in pretty much every field.
So… why didn’t EY pluck any yet? Or any of the forum regulars?
EDIT: It’s posts like this that make me think about the potential for phygishness. Four years after this post was first published there is still no experimental evidence that being armed with Bayesianism gives you any advantage in fundamental research, yet smart people like komponisto and paper-machine still disregard the absence of evidence.
This does raise a question about why no such experiment has been attempted. If one really thinks that one’s rationality skills are that much better, and that there’s that much low-hanging fruit, then this is an obvious thing to try.
My gut feel is that Bayesianism is one of the many useful tools in research, but it is no substitute for education, experience, creativity, patience or tenacity. I am not at all certain that it helps one identify the proverbial low-hanging fruit in any meaningful way. I would be very happy if komponisto or anyone else proved me wrong (experimentally, not argumentatively).
Agreed. There’s also a connected issue: Bayesianism just isn’t that much of a supersecret weapon. Bayesian techniques are used in many different fields. Maybe it would have been such a weapon fifty years ago, or even thirty years ago, but today Bayesian reasoning is common.
There seems to be a serious misunderstanding here. (The current voting patterns are completely out of whack with what I expected.) I seem to have run into some inferential distance that I didn’t realize existed. So let me try to be more detailed.
I would like to develop a social support structure for the-kind-of-people-who’ve-read-the-Sequences to pursue certain kinds of research outside of (existing) academia. Such a structure already exists, in the form of LW and SI, for some things (decision theory, and perhaps philosophy in general). I would like to see it extended to more things, including things that I happen to be interested in (but which aren’t necessarily considered immediately world-saving by the SI crowd).
(Notice that I mentioned both SI and LW in the previous paragraph. These are different kinds of entities, and I mentioned them both for a reason: to indicate how broad the notion of “social support structure” that I have in mind here is.)
I thought it was conventional wisdom around here that certain kinds of productive intellectual work are not properly incentivized by standard academia, and that the latter systematically fails to teach certain important intellectual skills. This is, after all, kind of the whole point of the MWI sequence!
Frankly, I expected it to be obvious that we’re not talking about anything as mundane as knowledge of Bayesian probability theory as a mathematical topic. Of course that isn’t a secret, and “everyone” in standard science knows it. I’m talking about an ethos, a culture, where people talk like they do in this story:
“Too slow! If Einstein were in this classroom now, rather than Earth of the negative first century, I would rap his knuckles! You will not try to do as well as Einstein! You will aspire to do BETTER than Einstein or you may as well not bother!”
“Assume, Brennan, that it takes five whole minutes to think an original thought, rather than learning it from someone else. Does even a major scientific problem require 5760 distinct insights?”
There is a difference, as LW readers well know, between understanding Bayesian probability theory as a mathematical tool, and “getting” the ethos of x-rationality.
No one is talking about “applying” some kind of Bayesian statistical method to an unsolved problem and hoping to magically get the right answer. Explicit probability theory need not enter into it at all. The thing that would be “applied” is the LW culture—where you’re actually allowed to try to understand things.
This is not intended as a rebellious status-grab. Let me repeat that: this is not a status-grab. For now, it is simply a fun project to work on. I am not laying claim to a magical aura of destiny. (As a matter of fact, the very idea that you have a certain amount of status before you’re allowed to work on important problems is itself one of the pathological assumptions of Traditional Science that the LW culture is specifically set up to avoid.)
Now, as for why no one has done this already: well, besides the “why”, there is also the “who”, the “what”, the “where”, and the “when”. Who would have thought to try it before, and under what circumstances? As far as I know, EY intended this story as a parable, not as a concrete plan of action. To him and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he’s been spending his time on; other forms of mathematical and scientific research are mostly viewed as shiny distractions that tempt smart people away from their real duty, which is to save the universe. (Yes, this is a caricature, but it’s true-as-a-caricature.) I take a somewhat different view, which may be due to a slightly different utility function, but in any case—I think there is much to be gained by exploring these alternative paths.
This is, after all, kind of the whole point of the MWI sequence!
It does not demonstrate this point, because it simply substitutes one kind of incompetence for another, and it also creates a myth or two about what the intellectual situation in quantum physics actually is.
The myth is that physicists believe in collapse of the wavefunction as a physical process caused by observation, rather than in many worlds, and that this is their main intellectual error. In fact, the central error of the subject is still just instrumentalist or pragmatist or anti-realist complacency, which says: the quantum formalism works, we can apply it to new situations as required, what more does a theory need? You have to understand that the empirical content of quantum mechanics is found in the expectation values of operators, the measured eigenvalues of observables, and so on, not in saying whether wavefunctions are real or are just devices for calculation, and not in saying whether they collapse or not. Many worlds only becomes a well-defined alternative to the instrumentalist attitude, in the way that Bohmian mechanics is a well-defined alternative, when it offers an objectively specified account of why the empirically relevant part of quantum mechanics works. Otherwise, this debate is just a game of “my god is better than your god”.
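To make “empirical content” concrete, the statements that actually face experiment have the standard textbook form

$$\langle A \rangle = \langle \psi \vert \hat{A} \vert \psi \rangle, \qquad \Pr(a_i) = \lvert \langle a_i \vert \psi \rangle \rvert^2,$$

that is, expectation values of observables and Born-rule probabilities for measurement outcomes. Nothing in these formulas, taken by themselves, says whether $\lvert \psi \rangle$ is a physical object or merely a calculational device.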
Remember the part of the sequence which says “where does the Born rule for probabilities come from?” And notice that, in answering this absolutely fundamental question, Eliezer doesn’t draw upon some standard idea, instead he highlights an almost unknown speculation by his pal the economist who used to be a physicist. He has to resort to this because professional physicists in the post-quantum era are bad at realism, even the many-worlds advocates. Most of the latter think that handwaving about decoherence explains everything; they have retained some bad and confused standards of explanation from their positivist predecessors, even while they reject the pragmatist philosophy.
Meanwhile, the sequences have created a small population of nonphysicists who think they know what the correct ontology of quantum mechanics is, even though they lack vital concepts like operators and observables, and have never even heard of alternatives like retrocausal interpretations or ’t Hooft’s holographic determinism. You know how there are all these people who, having heard of the problem of friendly AI, think they have a quick fix, whereas the SI perspective is that it is a dizzyingly hard problem full of pitfalls, and the real solution requires all sorts of unnatural-sounding efforts and methods? That’s my attitude to the problem of quantum ontology. And I regard what you see on this site, about this issue, as enthusiastic, but uninformed, and very naive, dilettantism.
I’m talking about an ethos, a culture, where people talk like they do in this story:
That is what I meant, too.
Now, as for why no one has done this already: well, besides the “why”, there is also the “who”, the “what”, the “where”, and the “when”. Who would have thought to try it before, and under what circumstances?
Some of those who read and believed the Class Project post 4 years ago.
To him and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he’s been spending his time on
And, given that it takes only “five whole minutes to think an original thought”, how many thousands of original thoughts should he have come up with in 4 years? How many Einstein-style breakthroughs should he have made by now? How many has he?
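(For scale, taking the story’s figure literally: five minutes per thought is twelve thoughts an hour, so even a single hour a day of such thinking would come to 12 × 365 × 4 ≈ 17,500 original thoughts over four years.)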
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?
At best this was a wish: it would be nice if it were possible to train many humans to become much more effective at research than the most able humans currently are, a kind of superpower story about rationality training. (Doing this by the kind of large margin implied in the story doesn’t seem particularly realistic to me, primarily because learning even very well-understood technical material takes a lot of time.) It’s certainly not a suggestion that reading LW does the trick, or that it’s easy (or merely very hard) to develop the necessary training program.
[The idea that one intended interpretation was that EY himself is essentially a Beisutsukai of the story is so ridiculous that participating in this conversation feels like a distinctly low-status thing to do, with mostly the bewilderment at the persistence of your argument driving me to publish this comment...]
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn’t sound.
EDIT: For what it’s worth, this is also the same failure mode anti-Randists fall into when they try to criticize Objectivism after reading The Fountainhead and/or Atlas Shrugged. It’s actually much cleaner to construct a criticism from her non-fiction materials, but then one would have to tolerate her non-fiction...
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn’t sound.
I don’t see anything there about the Bayesian way being much more productive than “Eld science”.
Some of those who read and believed the Class Project post 4 years ago.
I read the post when it appeared 4 years ago, and I don’t remember anyone saying “Hey, let’s set up a community for people who’ve read Overcoming Bias to research quantum gravity!”
How many Einstein-style breakthroughs should [EY] have made by now? How many has he?
I don’t really care to get into the usual argument about how much progress EY has made on FAI. As I’ve noted above, my own interests (for now) lie elsewhere.
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified.
It was not intended as a prediction about his own research efforts over the next four years, as far as I know. Especially since his focus over that time has been on community-building rather than direct FAI research.
It was not intended as a prediction about his own research efforts over the next four years, as far as I know.
Yet it was, whether it was meant to or not. Surely he would be the first one to apply this marvelous approach?
Especially since his focus over that time has been on community-building rather than direct FAI research.
This is a rationalization, and you know it. He stated several times that he neglected SI to concentrate on research.
However, leaving the FAI research alone, I am rooting for your success. I certainly agree that a collaboration of like-minded people has a much better chance of success than any of them on their own, Bayes or no Bayes.
That is, I would like to see a subcommunity of LW devoted to researching mathematical and scientific problems independently of the current formal academic structure.
Well, being both outside academia and not a complete novice in some fields of physics, I would love to get engaged in something like that, while learning the Bayesian way as I go. Whether there are others here in a similar position, I am not sure.
I would like to develop a social support structure for the-kind-of-people-who’ve-read-the-Sequences to pursue certain kinds of research outside of (existing) academia
One problem with this approach is that existing academia has access to all kinds of useful lab equipment, up to and including the Large Hadron Collider. It would be very difficult for a group of enthusiasts to acquire that kind of equipment; and, without it, it’s hard to do any truly revolutionary research.
Presumably the focus would be on areas, such as mathematics, that don’t require expensive equipment. That’s certainly my own interest, anyway.
By the way, I should point out that, although these projects would themselves be outside the academic system, the people pursuing them don’t necessarily need to be.
How do you see the common LW background helping such a group, vs just a group of mathematicians with the same background and credentials, but without any exposure to the Sequences?
My gut feel is that Bayesianism is one of the many useful tools in research, but it is no substitute for education, experience, creativity, patience or tenacity.
This is like saying that “intelligence is no match for a gun”—as if guns grew on trees (to use an EY-ism). Your idea of “Bayesianism” is far too narrow, as if it meant a specific tool (in fact you refer to it as such), rather than a way of life, which is closer to what it means in the context of LW and EY’s Brennan universe. Instead of “Bayesianism”, perhaps you should substitute “the rationality culture promoted by LW”.
If you do it right, you will of course make use of education, experience, creativity, patience and tenacity; and more.
The best part would be keeping the results secret (with independent verification). I expect it would make many people interested in LessWrong. The controversy alone, due to conflict with typical academic values of open access, would be great PR.
I would really like this to actually exist.
So who’s interested in building a rationalist subcommunity for mathematical and scientific research? Zack Davis? Any of the decision theory people? Does anyone else feel as I do?

I would be interested, and I feel this way.
There was an attempt headed by Tao (of course) and others, called DispersiveWiki, but it fell to a spam attack a couple months ago. I’m not sure if it’s been rebooted.

It seems to be working at the moment.
I think the wiki format would be very useful for work of this kind; and I agree that systematizing PDE results seems a promising approach.
There have been some attempts, but all previous attempts (that I’m aware of) have more or less failed.

What were the previous attempts you are thinking of?
Could someone find us some problems to work on that require relatively little advanced mathematics vocabulary to understand?
It’s posts like this that make me think about the potential for phygishness.

I’ll get the cloaks. You get the volcano lair.
So… why didn’t EY pluck any yet? Or any of the forum regulars?

They haven’t yet tried.
“No experimental evidence” is a hollow cry before any experiments have been conducted.
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?

It’s a work of fiction, not a model.
komponisto appears to be treating it in this discussion as a model, and I would assume that’s the context shminux is speaking in.
How about this: it was a falsifiable model disguised as a work of fiction?
Presumably the focus would be on areas, such as mathematics, that don’t require expensive equipment. That’s certainly my own interest, anyway.

What kinds of math problems are you interested in?