Clem here—I was fellowship lead this year and have been a research affiliate and mentor for PIBBSS in the past. Thanks for posting this. As might be expected given my position, I’m much more bullish than you / most people on what is often called “blue sky” research. Breakthroughs in our fundamental understanding of agency, minds, learning, etc. seem valuable in a range of scenarios, not just in a world dominated by an “intelligence explosion”. In particular, I think that this kind of work (a) benefits a huge amount from close engagement with empirical work, and (b) is itself very likely to inform near-future prosaic work. Furthermore, I feel that progress on these questions is genuinely possible, and is made significantly more likely with more people working on it from as many perspectives as possible.
That said, there are two things you say under “reservations” that I strongly agree with and have some comments on.
> I encourage PIBBSS to “embrace the weird,” albeit while maintaining high academic standards for basic research, modelled off the best basic science institutions.
There are worlds where furthering our understanding of the deeply confused basic concepts that underpin everything else we do isn’t considered “the weird”, but given that we’re not in those worlds I have to agree. One big issue I see here is that doing this well requires marrying the better parts of academic culture with the better parts of tech / rationality culture (and yes, for this purpose I place those in the same bucket). Some of the places that I think do this best—e.g. Google’s Paradigms of Intelligence team—have a culture / belief system somewhat incompatible with EA. It’s worth noting that people often pursue basic questions for very different reasons.
> I strongly encourage PIBBSS to publicly post and seek feedback on their applicant selection and research prioritization processes, so that the AI safety ecosystem can offer useful insight (and benefit from this).
I think this is actually really important, and it’s not something I think PIBBSS does very well currently. One thing I would note is that, for the reasons sketched above, I think it’s important that the AI safety ecosystem isn’t the only community interacting with this. One thing holding things back here is, in my view, the lack of a venue for this kind of research whose scope is primarily the basic research itself. This is not to say that relevance and impact for safety shouldn’t be a primary concern in research prioritisation—they very much should be—but I do think this can be done in a way that is more compatible with academic norms (at least those academic norms that are worth upholding).