I agree with the general sentiment that paying attention to group optimality, not just individual optimality, can be very important.
However, I am a bit skeptical of giving this too much importance when thinking about your research.
If we’re all doing what’s collectively best, we must each personally be doing what gives us the highest expectation of contributing (not of getting credit, but of contributing). If this were not the case, there would be at least one person who could change their strategy to have a better chance of contributing, which would improve the collective outcome and contradict the assumption that we were already doing what’s collectively best. So, “in an appropriate sense,” we should still do what’s best for our personal research.
It does not follow that if everyone is doing what seems personally best for their research, the group is following a collectively optimal path. However, I think it’s somewhat hard to produce a counterexample which doesn’t involve strategizing about who gets the credit.
> Instead, we should collectively act according to the rule that maximises the chance that someone in the community discovers the best idea.
Here’s a simplistic argument that you’re wrong: the “only way” to help create good ideas is by having good ideas. This isn’t really true (for example, I might trigger your bright idea by some innocuous action). However, it seems mostly true in the realm of purposeful research.
Anyway, with respect to IDA*, I’m curious exactly what you meant by “first order”.
I don’t yet see why IDA* is unsuitable in the multi-researcher context. You can set up the search problem to be a sub-problem that’s been farmed out to you, if you’re part of a larger research program which is organized by farming out sub-questions. You can integrate information from other people into your search tree, e.g. updating nodes with higher heuristic values if other people think they’re promising. And you can use the IDA* tree to coordinate a parallelized search, rather than running IDA* as a purely sequential search. (Perhaps this is the change you’re trying to point at?)
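For concreteness, here is a minimal sketch in Python of what “IDA* over an idea tree, informed by other people’s assessments” could look like. Everything in it (the node representation, the shared_estimates dict) is an illustrative assumption rather than anything from this exchange; the algorithm itself is just standard IDA*.

```python
# A minimal sketch of IDA* over a hypothetical "idea tree": nodes are ideas,
# step costs are the effort of developing a child idea, and the heuristic
# estimates remaining effort to a satisfying result. `shared_estimates` stands
# in for other researchers' assessments: in this cost framing, a node others
# consider promising gets a lower shared estimate, which overrides our default.

from math import inf

def ida_star(root, children, step_cost, heuristic, is_goal, shared_estimates=None):
    """Return a root-to-goal path as a list of nodes, or None if no goal is found."""
    shared = shared_estimates or {}

    def h(node):
        return shared.get(node, heuristic(node))

    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # overshoot: candidate for the next bound
        if is_goal(node):
            return "FOUND"
        minimum = inf
        for child in children(node):
            if child in path:             # skip trivial cycles
                continue
            path.append(child)
            result = search(path, g + step_cost(node, child), bound)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    path, bound = [root], h(root)
    while True:                           # iterative deepening on the f-bound
        result = search(path, 0, bound)
        if result == "FOUND":
            return path
        if result == inf:
            return None
        bound = result
```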
Some related questions:
How much should you focus on reading what other people do, vs doing your own things?
How much should you focus on communicating your work vs doing it?
How much should you avoid what other people are working on?
Should you randomize at all? How? (Randomization can help reduce redundant work in many contexts.)
Should you prioritize the subquestions other people think are important, or the ones you think are most important?
I worry for people who are only reading other people’s work, as if they have to “catch up” to everyone else before they can have any original thoughts of their own. I also worry for people who don’t read very much, but, I think, not nearly as much. The original-thought-havers still have a chance of producing a critical insight. The diligent readers have less of a chance.
I worry for people who work only on other people’s questions, because by the time you answer those questions, the original author may have moved on. Working on your own questions, which you think are important for your own reasons, provides more of a (still very fallible!) guarantee that your work will be useful somehow. At least there is someone somewhere who will think it useful. If you are working on other people’s questions, there is no such guarantee. It’s too easy to misunderstand the original purpose of the question, and do work that technically satisfies it but really doesn’t do what was wanted in a broader context.
> I’m curious exactly what you meant by “first order”.
Just that the trade-off is only present if you think of “individual rationality” as “let’s forget that I’m part of a community for a moment”. All things considered, there’s just rationality, and you should do what’s optimal.
First-order: everyone thinks that maximizing insight production means doing IDA* over the idea tree. Second-order: everyone notices that everyone will think that, so it’s no longer optimal for maximizing the insights produced overall. Everyone wants to coordinate with everyone else in order to parallelize their search (assuming they care about the total sum of insights produced). You can still do something like IDA* over your sub-branches.
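As a toy illustration of that second-order move (hypothetical names again, reusing the ida_star sketch above), the coordination could be as simple as splitting the idea tree’s top-level branches among researchers, each of whom then searches only their assigned branches and shares back which nodes look promising:

```python
# Toy sketch: divide the idea tree's top-level branches among researchers so the
# community searches in parallel instead of everyone re-running the same search.

def assign_branches(top_level_branches, researchers):
    """Round-robin assignment; returns a {researcher: [branches]} mapping."""
    assignment = {researcher: [] for researcher in researchers}
    for i, branch in enumerate(top_level_branches):
        assignment[researchers[i % len(researchers)]].append(branch)
    return assignment

# Each researcher would then run something like ida_star(branch, ...) on their
# own branches, feeding good estimates back into everyone's shared_estimates.
```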
This may have answered some of your other questions. Assuming you care about the alignment problem being solved, maximizing your expected counterfactual thinking-contribution means you should coordinate with your research community.
And, as you note, maximizing personal credit is unaligned as a separate matter. But if we’re all motivated by credit, our coordination can break down as people defect to grab credit.
> How much should you focus on reading what other people do, vs doing your own things?
This is not yet at a practical level, but: let’s say we want to approach something like a community-wide optimal trade-off between exploring and exploiting, and we can’t trivially check what everyone else is up to. If we think the optimum is something obviously silly like “75% of researchers should Explore, and the rest should Exploit,” and I predict that 50% of researchers will follow the rule I follow while the uncoordinated researchers will all Explore anyway, then it is rational for me to randomize my decision with a coinflip: the uncoordinated half already supplies 50 of the needed 75 percentage points of Explore, so the coordinated half should Explore with probability 1/2 to cover the remaining 25.
It gets newcomblike when I can’t check, but I can still follow a mix that’s optimal given the expected number of cooperating researchers and what I predict they will predict in turn. If our predictions are similar, the optimum given those predictions is a Schelling point. Of course, in the real world, if you actually had important practical strategies for optimizing community-level research, you would just write them up and get everyone to coordinate that way.
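To make the arithmetic of the coinflip example explicit (the numbers are the illustrative ones from above, not real estimates), each coordinated researcher should Explore with whatever probability makes the expected community-wide mix hit the target:

```python
# What Explore-probability should each coordinated researcher use so that the
# expected community-wide Explore fraction hits the target, given how the
# uncoordinated researchers are expected to behave?

def explore_probability(target_explore, coordinated_fraction, uncoordinated_explore_rate):
    uncoordinated = 1.0 - coordinated_fraction
    needed = target_explore - uncoordinated * uncoordinated_explore_rate
    return min(1.0, max(0.0, needed / coordinated_fraction))

# Optimum of 75% Explore, 50% of researchers follow the shared rule, and the
# uncoordinated half all Explore anyway:
print(explore_probability(0.75, 0.5, 1.0))   # -> 0.5, i.e. flip a coin
```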
> I worry for people who are only reading other people’s work, like they have to “catch up” to everyone else before they have any original thoughts of their own.
You touch on many things I care about. Part (not the main part) of why I want people to prioritize searching neglected nodes more is that Einstellung is real: once you’ve got a tool in your brain, you’re not going to know how to not use it, and it’ll be harder to think of alternatives. You want to increase your chance of acquiring neglected tools and perspectives with which to attack long-standing open problems. After all, if the usual tools were sufficient, why are they still long-standing open problems? If you diverge from the most common learning paths early, you’re more likely to end up with a productively different perspective.
> It’s too easy to misunderstand the original purpose of the question, and do work that technically satisfies it but really doesn’t do what was wanted in a broader context.
I’ve taken to calling this “bandwidth”, cf. Owen Cotton-Barratt.