So I agree with some of what you’re saying, along the lines of “There is such a thing as a generally useful algorithm” or “Some skills are deeper than others”, but I’m dubious about some of the consequences I think you think follow from them? Or maybe you don’t think these consequences follow, idk, and I’m imagining a person? Let me try to clarify.
There are clusters of habits that seem pretty useful for solving novel problems.
My expectation is that there are many skills / mental algorithms along these lines, such that you could truthfully say “Wow, people in diverse domains have found X mental algorithm useful for discovering new knowledge.” But also I think it’s probably true that the information actually shared between different domain-specific instances of “X mental algorithm” is going to be pretty small.
Like, take the skill of “breaking down skills into subskills, figuring out what subskills can be worked on, etc”. I think there’s probably some kind of algorithm you can run cross-domain that does this kind of thing. But without domain-specific pruning heuristics, and like a ton of domain-specific details, I expect that this algorithm basically just spits back “Well, too many options” rather than anything useful.
So: I expect non-domain-specific work put into sharpening up this algorithm to run into steeply diminishing returns, even if you can amortize the cost of sharpening it across many different domains that would benefit. If you could write down a program that can help you find relevant subskills in some domain, about 95% of the program is going to be domain-specific rather than domain-general, and there are something like only ~logarithmic returns to working on the non-domain-specific part. (Not being precise, just an intuition.)
Put another way: I expect you could specify some kind of algorithm like this as a very short mental program, but when you’re running the program, most mental compute goes into finding domain-specific program details.
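To gesture at that intuition with a toy sketch (entirely my own illustrative construction — the function, the arbitrary four “sub-aspects” per skill, and the pruning rule are all made up, not anything either of us has actually proposed): a fully generic decomposition search blows up combinatorially, and the domain-specific `prune` function is where nearly all the useful work lives.

```python
from itertools import combinations

def decompose(skill, depth, prune=None):
    """Enumerate candidate subskill breakdowns of `skill`, `depth` levels deep.

    The generic move — "any skill might split into any pair of sub-aspects" —
    is trivially short to state. The labels are placeholders; only domain
    knowledge could name real subskills or say which splits are worth keeping.
    """
    if depth == 0:
        return [[skill]]
    candidates = [f"{skill}.{i}" for i in range(4)]  # 4 arbitrary sub-aspects
    splits = list(combinations(candidates, 2))
    if prune:                       # domain-specific knowledge lives here
        splits = prune(splits)
    results = []
    for a, b in splits:
        lefts = decompose(a, depth - 1, prune)
        rights = decompose(b, depth - 1, prune)
        for left in lefts:
            for right in rights:
                results.append(left + right)
    return results

# Without pruning the search just says "too many options";
# a (stand-in) domain heuristic that keeps one split per level stays tiny.
unpruned = decompose("research", 3)
pruned = decompose("research", 3, prune=lambda s: s[:1])
print(len(unpruned), len(pruned))
```

The generic program is a dozen lines; the part that would make it useful (a real `prune`) is the part this sketch can’t write.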
Let me just describe the way the world looks to me. Maybe we actually think the same thing?
-- If you look throughout the history of science, I think that most discoveries look less like “Discoverer had good meta-level principles that let them situate themselves in the right place to solve the issue” and more like “Discoverer happened to be interested in the right chunk of reality that let them figure out an important problem, but it was mostly luck in situating themselves or their skills in this place.” I haven’t read a ton of history of science, but yeah.
-- Concretely, my bet is that most (many?) scientific discoverers of important things were extremely wrong on other important things, or found their original discovery through something like luck. (And some very important discoveries (Transformers) weren’t really identified as such at the time.)
-- Or, concretely, I think scientific progress overall probably hinges less on individual scientists having good meta-level principles, and more on like... whatever social phenomena are necessary to let individuals or groups of scientists run a distributed brute-force search. Extremely approximately.
-- So my belief is that so far we humans just haven’t found principles like the ones you’re seeking. Or that a lack of such principles can screw over your group (if you eschew falsifiability to a certain degree you’re fucked; if you ignore math you’re fucked) but that you can ultimately mostly raise the floor rather than the ceiling through work on them. Like there is a lot of math out there, and different kinds are very useful for different things!
-- I would be super excited to find such meta-level principles, btw. I feel like I’m being relentlessly negative. So to be clear, it would be awesome to find substantive meta-level principles such that non-domain-specific work on them could help people situate themselves and pursue work effectively in confusing domains. Like, I’m talking about this because I am very much interested in the project. I just right now… don’t think the world looks like they exist? It’s just that, in the absence of seeing groups that seem to have such principles, nothing that I know about minds in general makes me think that such principles are likely.
Or maybe I’m just confused about what you’re doing. Really uncertain about all the above.
I totally agree with how science normally works. I’m sitting here being like “whelp, doesn’t seem like the way science normally works can solve the problems I care about in time.”
It’s a serious question on my end: “can I raise the ceiling, or just the floor?” and “does raising the floor matter?”. Thinking about that led me to re-examine “can I actually help senior researchers?”, and feeling like I had at least some traction on that, which output the “Help Senior Researchers with Targeted Problems” idea, which indeed feels most important insofar as it’s tractable.
My sense is that most senior researchers at least “know, and sometimes think about, all the meta-level principles I’ve thought about so far.” But, they don’t always keep them in their “context window”. Some things I currently expect (at least some) senior researchers to not be attending to enough:
not making full use of their working-memory tools.
not consistently steering towards the most hard-and-uncertain-but-important parts of their problem, so they can falsify early and move on to the next idea
relatedly: pursuing things that are shiny and nerdsnipy.
not attending much to “deliberately cultivating their meta-strategies”, even in ways that just make sense to them. (My guess is often they’ll have decent taste for what they should do more of, if prompted, but they don’t prompt themselves to think about it as often as is optimal.)
Also, I think a bunch of them have various executive dysfunction stuff or health issues, which isn’t what I’m currently focused on but seems important.
(note: I think “pursue things that are shiny/nerdsnipy” is an important motivational system that I’m not sure how to engage with without breaking important things. But my guess here is something similar to “if you want to marry into wealth, hang out around rich people and then marry for love”. i.e. sink your attention into places where the shiny nerdsnipy problems are important, and then pick research directions based on excitement.)