Thanks, John, for the conversation and the write-up. It’s definitely great that you got something up. Here are my own add-ons to the post:
The Existing LW Questions Platform “Failed” Because of Lack of Context
I think a lot about how LessWrong can cause more intellectual progress, and more recently I’ve been thinking about why LessWrong’s existing Open Questions feature didn’t succeed at our highest hopes for it. Concretely, it hasn’t gotten existing full-time researchers outsourcing parts of their work to willing others via LessWrong. Researchers at OpenPhil, FHI, MIRI, AI Impacts, etc. don’t post questions that then get great answers via people on LessWrong going off and working for days/weeks/months.
One of the largest factors, I believe, is that it is in fact very difficult to convey the context of a research question: why it is interesting, what kinds of answers are useful, and how to go about answering it. At best you need to explain a large swathe of your current research project, and at worst someone needs to study for months or years to understand the background. This requires more effort on the part of the question-asker than writing up a few paragraphs, and possibly much more on the part of an answerer, who might then have a long reading list.
This problem of imparting context came up repeatedly in interviews I did with current LW/EA researchers.
Version 2: Research Agendas
Trying to address the problems on the Open Questions feature led me to something I’ve been calling the “Research Agendas feature” which I’ve attempted to model more closely on how research currently gets done.
“Research Agendas” are owned/worked on by a very small number of people who commit to working hard on them, in contrast with the QA forum model where a lot of people might spend a relatively small amount of attention. These people actually put in the hours to properly share context on the research via explaining and/or studying and/or talking at length.
“Research Agendas” are defined by 1) the Open Questions they’re trying to answer, and 2) a methodology/paradigm that the Research Agenda intends to use to define and/or answer the questions posed. In writing up a “Research Agenda”, one is expected to write up the context, or at least enough that someone could go study and come to understand it.
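To make that shape a bit more concrete, here is a minimal sketch of what a Research Agenda entry might contain, written as a TypeScript interface. This is purely illustrative: the type and field names are my own hypothetical choices, not an actual LessWrong schema.

```typescript
// Hypothetical sketch only: not the actual LessWrong data model.
// It just restates the two defining pieces of a "Research Agenda"
// (open questions + methodology) plus the context-sharing obligation.

interface OpenQuestion {
  title: string;
  whyItMatters: string;          // the context: why this question is interesting
  whatAnAnswerLooksLike: string; // what kinds of answers would be useful
}

interface ResearchAgenda {
  owners: string[];              // the small set of people committed to working hard on it
  openQuestions: OpenQuestion[]; // (1) the questions the agenda is trying to answer
  methodology: string;           // (2) the approach/paradigm used to define and answer them
  backgroundReading: string[];   // enough material that someone could go study and catch up
}

// A made-up example entry:
const exampleAgenda: ResearchAgenda = {
  owners: ["some-researcher"],
  openQuestions: [
    {
      title: "An illustrative open question",
      whyItMatters: "Why the question is interesting and what an answer buys you.",
      whatAnAnswerLooksLike: "What a useful answer would look like.",
    },
  ],
  methodology: "The intended approach for defining and answering the questions.",
  backgroundReading: ["Pointer to background material"],
};
```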
If “Research Agendas” caught on, the way you’d know what some researcher was up to is that you’d go read their open “Research Agendas”, where they explain what they’re trying to do. Others could potentially join in.
Open Questions → Research Agendas → Paradigms
If you bundle up enough questions into “Research Agendas” that share common context, presumed methods, and a sense of what the answers look like, and if the questions are compelling enough, then I think you get on track to having a shared paradigm in which people broadly have shared context: a sense of what they’re trying to answer, how to go about it, and what success looks like. Consequently, they don’t need to keep rehashing the fundamentals each time.
I think John makes it sound a bit too easy. To get a whole paradigm going, I think you need some deep Open Questions that generate enough work to keep people busy for a while, and I think those questions need to be quite compelling. I’m going mostly off what Kuhn himself, the popularizer of this “paradigm” notion, wrote:
In this essay, ‘normal science’ means research firmly based upon one or more past scientific achievements, achievements that some particular scientific community acknowledges for a time as supplying the foundation for its further practice. Today such achievements are recounted, though seldom in their original form, by science textbooks, elementary and advanced. These textbooks expound the body of accepted theory, illustrate many or all of its successful applications, and compare these applications with exemplary observations and experiments. Before such books became popular early in the nineteenth century (and until even more recently in the newly matured sciences), many of the famous classics of science fulfilled a similar function. Aristotle’s Physica, Ptolemy’s Almagest, Newton’s Principia and Opticks, Franklin’s Electricity, Lavoisier’s Chemistry, and Lyell’s Geology—these and many other works served for a time implicitly to define the legitimate problems and methods of a research field for succeeding generations of practitioners. They were able to do so because they shared two essential characteristics. Their achievement was sufficiently unprecedented to attract an enduring group of adherents away from competing modes of scientific activity. Simultaneously, it was sufficiently open-ended to leave all sorts of problems for the redefined group of practitioners to resolve.
Achievements that share these two characteristics I shall henceforth refer to as ‘paradigms,’ a term that relates closely to ‘normal science.’ By choosing it, I mean to suggest that some accepted examples of actual scientific practice—examples which include law, theory, application, and instrumentation together—provide models from which spring particular coherent traditions of scientific research. These are the traditions which the historian describes under such rubrics as ‘Ptolemaic astronomy’ (or ‘Copernican’), ‘Aristotelian dynamics’ (or ‘Newtonian’), ‘corpuscular optics’ (or ‘wave optics’), and so on. The study of paradigms, including many that are far more specialized than those named illustratively above, is what mainly prepares the student for membership in the particular scientific community with which he will later practice. Because he there joins men who learned the bases of their field from the same concrete models, his subsequent practice will seldom evoke overt disagreement over fundamentals. Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition.
Kuhn, Thomas S. The Structure of Scientific Revolutions (pp. 10–11). University of Chicago Press. Kindle Edition. [Emphasis added]
One comment I made in response to John’s draft is that, following Kuhn, you probably need more than writing skill and memetic reach: you need to be building a scientific achievement that people recognize as striking at what they really care about, such that they stop what they’re doing and come do it your way.
Embedded Agency might actually achieve that, but I’m led to believe it wasn’t a small feat.
Methodology
Another comment is that I think shared methodology needs emphasis. It’s natural to lump methodology under the Open Question definition, but it’s important enough to highlight as crucial to establishing paradigms.
Achieving Paradigm-genesis
As far as I can tell, most of the research of interest to the LW/EA cluster is in a pre-paradigmatic state. There’s an undercurrent of shared epistemic approach and people are trying to innovate (one, two, three), but there’s no sense of “these are the questions we need to answer, this is what an answer looks like, and this is what you should do to get that answer”. Existing methods and standards of analysis in history and sociology probably don’t cut it for us, but we’re not mature enough to have our own. Relatedly, we’ve got proliferating schools of AI Alignment/Safety (we can’t even agree on the name, geeze).
I should clarify: we want multiple paradigms for the multiple different problems we tackle. Predicting the rate of technological progress is a different task from developing a provably safe AGI design.
I don’t think reaching enough consensus to form paradigms will be quick, but I’m hopeful that if we can more clearly communicate the problems we’re tackling, how we’re tackling them, and the results we’re getting, then we’re on track to building ourselves little paradigms and sub-paradigms that make our thoughts precise and greatly accelerate work (up until the point you discover where your paradigm was broken from the beginning).
Communities evolve around shared methodology because of the surprisingly detail-rich nature of reality. Methodology has a lot of degrees of freedom. This then creates correlated blind spots/echo chambers.
One way I like to think about this is common data formats. Hard research problems create new data formats suited to the problem. But if the researchers aren’t also doing the extra work to maintain backwards compatibility, then they won’t notice when they start rejecting things for not already being in their preferred format. And for understandable reasons! Research time can be precious, especially in the environment of artificial scarcity that donors en masse believe is healthy.
I’m noticing what might be a miscommunication/misunderstanding between your comment, the post, and Kuhn. It’s not that the statement of such open problems creates the paradigm; it’s that solutions to those problems create the paradigm.
The problems exist because the old paradigms (concepts, methods, etc.) can’t solve them. If you can state some open problems such that everyone agrees those problems matter, and such that solutions could be verified by the community, then you’ve got a setup in which solutions create a new paradigm. A solution will necessarily use new concepts and methods. If accepted by the community, these concepts and methods constitute the new paradigm.
(Even this doesn’t always work if the techniques can’t be carried over to further problems and progress. For example, my impression is that Logical Induction nailed the solution to a legitimately important open problem, but it does not seem that the solution has been of a kind which could be used for further progress.)