I naturally tend to make progress by focusing on one thing for long periods. But I also want to ensure that I keep exploring a variety of things. So I generally let myself stay on one thing for something on the order of a few days, and I do periodic reviews to make sure I endorse how that ends up averaging out. I’ve spent the last couple of weeks almost entirely focusing on bullet points 3 and 4, and I would feel some anxiety about that if I couldn’t look back through my log and see that I’ve spent entire previous weeks doing nothing on 3 and 4 and instead focusing on 1 and 5.
A question that comes to mind here is “what are your feedback loops?” You do mention this a bit here:
I have some intuitive sense that agent foundations is important, but also some intuitive worry of “man how would we know if we were making real progress?”.
I do think in some sense it’s reasonable/expected to take a few years to see results, but a thing on my mind these days is “can we do better than ‘a few years’ in terms of feedback loops?”
I worry that overemphasizing fast feedback loops ends up making projects more myopic than is good for novel research. Like, unfortunately for everyone, a bunch of what makes good research is good research taste, and no one really understands what that is or how to get it, and so it’s tricky to design feedback loops to make it better. Like, I think the sense of “there’s something interesting here that no one else is seeing” is sort of inherently hard to get feedback on, at least from other people. Because if you could, or if it were easy to explain from the get-go, then probably other people would have already thought of it. You can maybe get feedback from the world, but it often takes a while to turn that sense into something concrete. E.g., Einstein was already onto the idea that there was something strange about light and relativity from when he was 16, but he didn’t have a shippable idea until about ten years later.
I don’t think it always takes ten years, but deconfusion work is just… weird. E.g., Einstein was prone to bouts of “psychic tension” and would spend weeks in a “state of confusion.” He wasn’t part of academia, shared his thoughts with few people, and did very little experimentation. Which isn’t to say that there was literally no feedback involved (there was), but it’s a complicated story which I think unfortunately involves a bunch of feedback directly from his research taste (i.e., his taste was putting constraints like “unity,” “logical,” and “physical meaning” on the hypothesis space). And this certainly isn’t to say “let’s not try to make it go faster”; obviously, if one could make research faster, that seems great. But I think it’s a careful balancing act, and I worry that putting too much pressure on speed and legibility is going to end up causing people to do science under the streetlight. I really do not want this to happen. Field-founding science is a bunch weirder than normal science, and I want to take care in giving research taste enough space to find its feet.
I agree there’s a risk that overemphasizing fast feedback loops damages science.
My current belief is that gaining research taste is something that shouldn’t be that mysterious, and mostly it seems to be something that:
- does require quite a bit of effort (which is why I think it isn’t done by default)
- also requires at least some decent meta-taste on how to gain taste (but my guess is Alex Altair in particular has enough of this to navigate it)
And… meanwhile I feel like we just don’t have the luxury of not at least trying on this axis to some degree.
(I don’t know that I can back up this statement very much; this seems to be a research vein I currently believe in that no one else currently seems to.)
It is plausible to me (based on things like Alex’s comment on this other post you recently responded to, and other convos with him) that Alex-in-particular is already basically doing all the things that make sense to do here.
But, like, looking here:
But I think it’s a careful balancing act, and I worry that putting too much pressure on speed and legibility is going to end up causing people to do science under the streetlight. I really do not want this to happen. Field-founding science is a bunch weirder than normal science, and I want to take care in giving research taste enough space to find its feet.
I think the amount I’m pushing for this here is “at all”, and it feels premature to me to jump to “this will ruin the research process”.