I’m talking about an ethos, a culture, where people talk like they do in this story:
That is what I meant, too.
Now, as for why no one has done this already: well, besides the “why”, there is also the “who”, the “what”, the “where”, and the “when”. Who would have thought to try it before, and under what circumstances?
Some of those who read and believed the Class Project post 4 years ago.
To him (i.e., EY) and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he’s been spending his time on.
And, given that it takes only “five whole minutes to think an original thought”, how many thousands of original thoughts should he have come up with in 4 years? How many Einstein-style breakthroughs should he have made by now? How many has he?
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?
At best this was a wish that it were possible to train many humans to become much more effective at research than the most able humans currently are, maybe a kind of superpower story about rationality training (doing this by the large margin implied in the story doesn’t seem particularly realistic to me, primarily because learning even very well-understood technical material takes a lot of time). It’s certainly not a suggestion that reading LW does the trick, or that it’s easy (or merely very hard) to develop the necessary training program.
[The idea that one intended interpretation was that EY himself is essentially a Beisutsukai of the story is so ridiculous that participating in this conversation feels like a distinctly low-status thing to do; it is mostly bewilderment at the persistence of your argument that drives me to publish this comment...]
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn’t sound.
EDIT: For what it’s worth, this is also the same failure mode anti-Randists fall into when they try to criticize Objectivism after reading The Fountainhead and/or Atlas Shrugged. It’s actually much cleaner to construct a criticism from her non-fiction materials, but then one would have to tolerate her non-fiction...
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn’t sound.
I don’t see anything there about the Bayesian way being much more productive than “Eld science”.
Some of those who read and believed the Class Project post 4 years ago.
I read the post when it appeared 4 years ago, and I don’t remember anyone saying “Hey, let’s set up a community for people who’ve read Overcoming Bias to research quantum gravity!”
How many Einstein-style breakthroughs should [EY] have made by now? How many has he?
I don’t really care to get into the usual argument about how much progress EY has made on FAI. As I’ve noted above, my own interests (for now) lie elsewhere.
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified.
It was not intended as a prediction about his own research efforts over the next four years, as far as I know. Especially since his focus over that time has been on community-building rather than direct FAI research.
It was not intended as a prediction about his own research efforts over the next four years, as far as I know.
Yet it was, whether it was meant to or not. Surely he would be the first one to apply this marvelous approach?
Especially since his focus over that time has been on community-building rather than direct FAI research.
This is a rationalization, and you know it. He stated several times that he neglected SI to concentrate on research.
However, leaving the FAI research alone, I am rooting for your success. I certainly agree that a collaboration of like-minded people has a much better chance of success than any of them on their own, Bayes or no Bayes.
That is, I would like to see a subcommunity of LW devoted to researching mathematical and scientific problems independently of the current formal academic structure.
Well, being both outside academia and not a complete novice in some fields of physics, I would love to get engaged in something like that, learning the Bayesian way along the way. Whether there are others here in a similar position, I am not sure.
Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?
It’s a work of fiction, not a model.
komponisto appears to be treating it in this discussion as a model, and I would assume that’s the context shminux is speaking in.
How about this: it was a falsifiable model disguised as a work of fiction?