At best this was a wish: it would be nice if it were possible to train many humans to become much more effective at research than the most able humans currently are, a kind of superpower story about rationality training. (Doing this by the kind of large margin implied in the story doesn't seem particularly realistic to me, primarily because learning even very well-understood technical material takes a lot of time.) It's certainly not a suggestion that reading LW does the trick, or that it's easy (or merely very hard) to develop the necessary training program.
[The idea that one intended interpretation was that EY himself is essentially a Beisutsukai of the story is so ridiculous that participating in this conversation feels like a distinctly low-status thing to do; it is mostly bewilderment at the persistence of your argument that drives me to publish this comment...]
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details from the work of fiction created to illustrate it isn't sound.
EDIT: For what it's worth, this is the same failure mode anti-Randists fall into when they try to criticize Objectivism after reading The Fountainhead and/or Atlas Shrugged. It's actually much cleaner to construct a criticism from her non-fiction materials, but then one would have to tolerate her non-fiction...
The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details from the work of fiction created to illustrate it isn't sound.
I don’t see anything there about the Bayesian way being much more productive than “Eld science”.
It’s a work of fiction, not a model.
komponisto appears to be treating it in this discussion as a model, and I would assume that’s the context shminux is speaking in.
How about this: it was a falsifiable model disguised as a work of fiction?