If this blog’s “hard questions” have utility, they should be novel, important, and answerable.
Important questions are highly likely to be known already among experts in the relevant field. If they’re answerable, one of those experts is likely already working on them with more rigor than you’re capable of extracting from a crowd of anonymous bloggers. I think, then, that any questions you ask have a high probability of being redundant, unimportant, or unanswerable (at least to a useful degree of rigor). Unfortunately, you’re unlikely to know that in advance unless you vet the questions with experts in the relevant literature.
And at that point, you’re starting to look like an unaccountable, opaque, disorganized, and underresourced anonymously peer-reviewed journal.
It might be interesting to explore the possibility that a wiki-written or amateur-sourced peer-reviewed journal could have some utility, especially if it focused on a topic less dependent on the expensive and often opaque process of gathering empirical data. I expect that anyone who can advance the field of mathematics is probably already a PhD mathematician. So philosophy, decision theory, something like that?
Developing a process to help an anonymous crowd of blog enthusiasts turn their labor into a respectable product would be useful and motivating. I would start by making your next “hard question” what specific topic such a peer-reviewed journal could usefully focus on.
Your premises seem strange to me: questions are either important and already being worked on, or unimportant? Already-worked-on questions don’t need answers? Both of these seem false.
If an expert somewhere knows the answer to something, I still often need to know the answer myself (because it’s a piece of a broader puzzle that I care about, which the expert doesn’t necessarily care about). I still need someone to go find the answer, distill it, and help put it into a new context.
The LW community has historically tackled questions that were important and that few other people were working on (in particular, questions related to human rationality, AI alignment, and effective altruism).