I generally find this compelling, but I wonder if it proves too much about current philosophy of science and meta-science work. If people in those fields have produced useful insights without themselves getting their hands dirty with the object-level work of other scientific fields, then the argument proves too much. I suspect there is some such work. Additionally:
I would guess that if none of the founders have ample personal experience doing research work in a wetlab, the chance of this startup building an actually-highly-useful wetlab product drops by about an order of magnitude.
An order of magnitude seems like a lot here. My intuitive feeling is that lacking relevant experience makes you 30-80% less likely to succeed, not 90%+. This probability likely matters for whether certain people should work on trying to make tools for alignment research now. I thought a bit about reference classes we could use here to get a better sense, but nothing great comes to mind. Assuming for simplicity that founders with and without expertise attempt such products at about the same rate (incorrect, I'm sure), we can look at existing products.
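To make the gap between these two estimates concrete, here is a toy calculation (the 10% baseline success rate is a hypothetical number chosen for illustration, not a claim from the post):

```python
# Toy comparison of the two estimates (all numbers hypothetical).
base_success = 0.10  # assumed chance of success for founders WITH domain expertise

# "Order of magnitude" drop: success falls to ~1/10 of baseline
oom_estimate = base_success / 10

# My range: 30-80% less likely to succeed than baseline
low_end = base_success * (1 - 0.80)
high_end = base_success * (1 - 0.30)

print(f"OOM estimate: {oom_estimate:.0%}")            # 1%
print(f"30-80% range: {low_end:.0%} to {high_end:.0%}")  # 2% to 7%
```

On these illustrative numbers, the order-of-magnitude view leaves founders without expertise with roughly half to a seventh of the success probability that my range would give them, which is why the disagreement could matter for who should attempt this work.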
Mainly, I think there are not very many specific products that 5x research speed, which is (maybe) the thing we're aiming for. Below is a brainstorm of products that seem to somewhat speed up research, but they're mostly general-purpose rather than domain-specific, and I don't think any of them provide that big an increase (maybe math assistants and Google Scholar are a 1.3-2x productivity boost to the research process overall [all the math assistants together are big gains, but each one compared to its predecessor is not that big]). It seems that you are making a claim about something for which we don't have nice historical examples: products that speed up research by 5x. The examples I came up with seem to be mostly relatively general-purpose, which implies that their founders didn't have strong object-level knowledge of each of the fields they were helping with; but I don't think these are very good examples. This data is also terrible because it has the selection bias of being things I could think of, and I probably haven't heard of productivity-boosting things that are specific to most fields.
Brainstorming examples of products that seem to do this:
- Copilot and other code assistants
- General productivity boosters like task-management software, word processors, etc.
- Tools for searching literature better, like Google Scholar, Elicit, Connected Papers, etc.
- Math assistants like calculators, Excel, Stata, etc.
- Maybe some general-purpose technologies like the internet and electricity count, but those feel like cheating due to not being specific products.
I don't think this meager evidence particularly supports the claim that lacking domain knowledge decreases one's chances of success by 90%+. I think it supports something more like the following: it seems very hard to increase productivity 3x or more with a single product, and the current products that do increase productivity are often general-purpose. Those trying to revolutionize alignment research by making products to speed it up should recognize that large boosts are unlikely on base rates.
I'm curious whether anybody has examples of products that they think 3x'd or more the total rate of their own or others' research. Ideally this would be 3x compared to the next-best thing available before it came along. I think people tend to overestimate productivity gains from tools, so the follow-up questions are: would you rather work 9 hours a day without that tool or 3 hours with it, and which scenario gets more work done? Is that tool responsible for 67%+ of your research output, and would it be a good deal for 67% of your salary (or however society compensates you for research output, e.g., impact credits) to be attributed to that tool? A tool doesn't necessarily have to pass all of these tests, but I include them here because saying a tool 3xed your overall research productivity is a really bold claim. Without examples of this, it's hard to make confident claims about what it will take to make such tools in the future.
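The arithmetic behind these tests can be sketched as follows (the specific hours are just the illustrative numbers from the questions above):

```python
# Sketch of the arithmetic behind the "3x tool" tests (illustrative numbers only).

hours_without = 9      # hours/day working without the tool
hours_with = 3         # hours/day working with the tool
tool_multiplier = 3    # claimed productivity multiplier from the tool

output_without = hours_without * 1                # baseline productivity
output_with = hours_with * tool_multiplier        # boosted productivity

# A genuinely 3x tool means 3 hours with it matches 9 hours without it.
assert output_with == output_without

# If the tool triples your output, then without it you'd produce only 1/3
# as much, so the tool's share of your output is 1 - 1/3 = ~67%.
tool_share = 1 - 1 / tool_multiplier
print(f"tool's share of output: {tool_share:.0%}")  # 67%
```

This is where the 67% figure in the salary question comes from: attributing a 3x multiplier to a tool is equivalent to crediting it with two-thirds of everything you produce.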