Heilmeier’s Catechism, a set of questions credited to George H. Heilmeier that anyone proposing a research project or product development effort should be able to answer.
“How much will it cost?” “How long will it take?” Who the hell is supposed to be able to answer that on a basic research problem?
Anyone applying for grant money. Anyone working within the academic, industrial, or government research community.
Gentleman scientists working on their own time and money in their ancestral manors are still free to do basic research.
Nowadays, everyone who applies for a grant.
Nowadays, no one does basic research.
The people who are running the LHC aren’t doing basic research?
More precisely: no one whose status isn’t ultra-high is allowed to do basic research without having to pretend they’re doing something else.
You can take them as a calibration exercise. “I don’t know” or “Between a week and five centuries” are answers, and the point of asking the question is that some due diligence is likely to yield a better (more discriminating) answer.
Someone who had to pick one of two “basic research problems” to fund, under constraints of finite resources, would need estimates. They can also provide some guidance to answer “How long do we stick with this before going to Plan B?”
These questions are for public proposals, not for someone considering a project by themselves. If you’re building a collider or wish to play with someone else’s collider, you’d better know how much it will cost and how long it will take.
Interesting, but some of the questions aren’t easy to answer.
For example, if you were asking the questions of someone involved in early contraception development, do you think they could have predicted the demographic and birth-rate changes it would cause? Similarly, could someone inventing a better general machine-learning technique (useful for everything from surveillance to robot butlers) enumerate the variety of ways it would change the world?
For AI projects, even weak ones, I would ask how they planned to avoid the puppet problem.
The point of such “catechisms” isn’t so much to have all the answers as to ensure that you have divided your attention evenly among a reasonable set of questions at the outset, in an effort to avoid “motivated cognition”: focusing on the thinking you find easy or pleasant to do, as opposed to the thinking that’s necessary.
The idea is to improve at predicting your predictable failures. If this kind of thinking turns up a thorny question you don’t know how to answer, you can lay the current project aside until you have solved the thorny question, as a matter of prudent dependency management.
A related example is surgery checklists. They work (see Atul Gawande’s Better). Surgeons hate them: their motivated cognition focuses on the technically juicy bits of surgery, and they feel that trivia such as checking which limb they’re operating on is beneath them.
I’m a big believer in surgery checklists. However, I have yet to be convinced that the catechisms, unaltered, will benefit every research project.
A lot of science is about doing experiments whose outcomes we don’t know and serendipitously discovering things. Two examples that spring to mind are superconductivity and fullerene production.
If you had asked each of the discoverers to justify their research by the catechisms, you probably would have got nowhere near the actual results. This potential for serendipity should be built into the catechisms in some way. That is, the answer “For Science!” has to hold some weight, even if it is less weight than is currently ascribed to it.
Yep. IOW the catechism can be used to discriminate between “fundamental” science, so-called, and applied engineering projects.
There’s a (subtle, perhaps) difference between advocating catechisms or checklists normatively (“this is a useful standard to compare yourself to”) and prescriptively (“do it this way or do it elsewhere”). To put yet another domain on the table, inability to draw the distinction plagues the project management professional community. “Methodologies” or “processes” are too often, and inappropriately, seen as edicts rather than sources of good ideas.
How about applying the catechism to LessWrong as a product development project? ;)
Sounds like good rules of thumb, though one would think DARPA should be using something a little more formal, such as Decision Analysis methodology.
http://decision.stanford.edu/library/the-principles-and-applications-of-decision-analysis-1
For one, the value of acquiring information did not make the list. Maybe this was a dumbed-down version.
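To make “value of acquiring information” concrete, here is a minimal sketch of the expected value of perfect information (EVPI), the standard decision-analysis quantity: how much better you could do, in expectation, if someone told you the outcome before you had to choose which project to fund. All the probabilities and payoffs below are hypothetical numbers invented for illustration, not anything from the Decision Analysis literature or the catechism itself.

```python
# Hedged sketch: expected value of perfect information (EVPI)
# for choosing between two hypothetical research projects.
# All numbers are made up for illustration.

# Assume one uncertain state of the world: "breakthrough" vs "dead_end".
p_breakthrough = 0.3

# Hypothetical payoffs, indexed by (project, state), in arbitrary units.
payoff = {
    ("A", "breakthrough"): 100, ("A", "dead_end"): -10,
    ("B", "breakthrough"): 20,  ("B", "dead_end"): 5,
}

def expected(project):
    """Expected payoff of funding a project without further information."""
    return (p_breakthrough * payoff[(project, "breakthrough")]
            + (1 - p_breakthrough) * payoff[(project, "dead_end")])

# Best you can do acting now, with only your prior:
ev_no_info = max(expected("A"), expected("B"))  # 23.0 (project A)

# With perfect information you would pick the best project in each state:
ev_perfect = (p_breakthrough * max(payoff[("A", "breakthrough")],
                                   payoff[("B", "breakthrough")])
              + (1 - p_breakthrough) * max(payoff[("A", "dead_end")],
                                           payoff[("B", "dead_end")]))

# EVPI is an upper bound on what any pilot study or experiment is worth.
evpi = ev_perfect - ev_no_info
print(f"EV without info:      {ev_no_info:.1f}")
print(f"EV with perfect info: {ev_perfect:.1f}")
print(f"EVPI:                 {evpi:.1f}")  # 10.5 here
```

The point of the exercise: if a proposed feasibility study costs more than the EVPI (10.5 units in this toy setup), it isn’t worth running no matter how informative it is, which is exactly the kind of question a purely qualitative catechism never forces you to ask.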