People focus their research on the areas that are interesting to them, and this doesn’t seem to be one of them.
Well, yes, obviously, but just as obvious is the question: why? Why isn’t anyone interested in this, when it sure seems like it should be extremely important? (Or do you disagree? Should we not expect that the epistemic status of the work that we still base much of our reasoning on should be of great interest?)
As for the grant thing—fair point, that could make it worth someone’s time to do. Although the sort of money being offered seems rather low for the value I, at least, would naively expect this community to place on work like this. (Which is no slight against the grantmakers—after all, it’s not like they had this particular project in mind. I am only saying that the grants in question don’t quite measure up to what seems to me like the value of a project like this, and thus aren’t quite an ideal match for it.)
Although the sort of money being offered seems rather low for the value I, at least, would naively expect this community to place on work like this.
If there were someone who’s clearly capable of doing a good job at the task, I would expect that the project could find reasonable funding.
Well, yes, obviously, but just as obvious is the question: why? Why isn’t anyone interested in this, when it sure seems like it should be extremely important? (Or do you disagree? Should we not expect that the epistemic status of the work that we still base much of our reasoning on should be of great interest?)
I don’t think that I, or most of the rationality community, base most of our reasoning on knowledge that we believe because someone made a claim in an academic paper.
Take Julia Galef’s recent work on the “Scout mindset”. It’s basically about the thesis that, whether or not those cognitive biases exist, teaching people about them won’t make them more rational as long as they stay in “Soldier mindset”.
There’s CFAR, which built their curriculum by iterating a lot and looking at the effects of what they were doing, not primarily by trusting academic knowledge to be reliable. They used papers for inspiration, but they tested whether ideas actually worked in practice in the workshop context.
Over at the Good Judgement Project, Tetlock didn’t find that what makes good superforecasters is their knowledge of logical fallacies or cognitive biases, but rather a series of other heuristics.
There’s a sense that fake frameworks are okay, so if some of what’s in the Sequences is a fake framework, that’s not inherently problematic. When doing research, it’s generally good to have a theory of change and then focus on what’s required.
It seems awfully convenient that Eliezer made all these claims, in the Sequences, that were definitely and unquestionably factual, and based on empirical findings; that he based his conclusions on them; that he described many of these claims as surprising, such that they shifted his views, and ought to shift ours (that is, the reader’s)… but then, when many (but how many? we don’t know, and haven’t checked) of the findings in question failed to replicate, now we decide that it’s okay if they’re “fake frameworks”.
Does it not seem to you like this is precisely the sort of attitude toward the truth that the Sequences go to heroic lengths to warn against?
(As for CFAR, that seems to me to be a rather poor example. As far as I’m aware, CFAR has never empirically validated their techniques in any serious way, and indeed stopped trying to do so a long time ago, after initial attempts at such validation failed.)
CFAR generally taught classes and then followed up with people. They didn’t do that in a scientifically rigorous manner, and they had no interest in collaborating with academics like Falk Lieder to run a rigorous inquiry, but that doesn’t mean that their approach wasn’t empirical. There were plenty of classes they ran in the beginning where, after doing them and looking at the empirical feedback, they learned that those classes were a bad idea.
Does it not seem to you like this is precisely the sort of attitude toward the truth that the Sequences go to heroic lengths to warn against?
You might argue that the view of the Sequences is opposed to the one that’s expressed in “fake frameworks”, but the latter still seems to me to be popular right now.
I don’t deny that there would be some value in someone going through and fact-checking all the Sequences, but at the same time I understand why that’s nobody’s Hemming problem.
Hemming problem?
I misspelled it; it should be “Hamming problem”. See https://www.lesswrong.com/posts/P5k3PGzebd5yYrYqd/the-hamming-question