Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the academics most involved I’m confident that things will work out well.
Firstly, re: your first post, a few words from our Academic Director and co-founder Huw Price (who doesn’t have a LW account).
“Thanks for your questions! What the three people mentioned have in common is that they are all interested in applying their expertise to the challenges of managing extreme risks arising from new technologies. That’s CSER’s goal, and we’re looking for brilliant early-career researchers interested in working on these issues, with their own ideas about how their skills are relevant. We don’t want to try to list all the possible fields these people might come from, because we know that some of you will have ideas we haven’t thought of yet. The study of technological xrisk is a new interdisciplinary subfield, still taking shape. We’re looking for brilliant and committed people, to help us design it.
We expect that the people we appoint will publish mainly in the journals of their home field, thus helping to raise awareness of these important issues within those fields – but there will be opportunities for inter-field collaborations too, so you may find yourself publishing in places you wouldn’t have expected. We anticipate that most of our postdocs will go on to distinguished careers in their home fields, too, though hopefully in a way which maintains their links with the interdisciplinary xrisk community. We anticipate that there will also be some opportunities for more specialised career paths, as the field and funding expand.”
A few words of my own to expand: As you and Ryan have discussed, we have a number of specific, quite well-defined subprojects that we have secured grant funding for (two more will be announced later on). But we are also in the lucky position of having some more unconstrained postdoctoral position funding – and now, as Huw says, seems like an opportune time to see what people, and ideas, are out there, and what we haven’t considered. Future calls are likely to be a lot more constrained – as the centre’s ongoing projects and goals get more locked in, and as we need to hire very specific people to work on specific grants.
Some disciplines seem very obviously relevant to me – e.g. if the existential risk community is to do work on AI, synthetic biology, pandemic risk, and geoengineering, it needs people with qualifications in CS/math, biology/informatics, epidemiology, and climate modelling/physics. Disciplines relevant to risk modelling and assessment seem obvious, as do science & technology studies, philosophy of science, and policy/governance. In aiming to develop implementable strategies for safe technology development and x-risk reduction, economics, law, and international relations seem like fields that might produce people with necessary insights. Some are a little less clear-cut: insights into horizon-scanning and foresight/technological prediction could come from a range of areas. And I’m sure there are disciplines we are simply missing.
Obviously we can’t hire people with all of these backgrounds now (although, over the course of the centre we would aim to have all these disciplines pass through and make their mark). But we don’t necessarily need to; we have enough strong academic connections that we will usually be able to provide relevant advisors and collaborators to complement what we have ‘in house’. E.g. if a policy/law-background person seems like an excellent fit for biosecurity work or biotech policy/regulation, we would aim to make sure there’s both a senior person in policy/law to provide guidance, and collaborators in biology to make sure the science is there. And vice versa.
With all that said, from my time at FHI and CSER, a lot of the biggest progress and ideas have come from people whose backgrounds might not have immediately seemed obvious to x-risk, at least to me – cosmologists, philosophers, neuroscientists. We want to make sure we get the people, and the ideas, wherever they may be.
With regards to your second post:
You again raise good questions. For the people who don’t fall squarely into the ‘shovel-ready’ projects (although the majority of our hires this year will), I expect we will set up senior support structures on a case-by-case basis, depending on what the project/person needs.
One model is co-supervision, or supervisor plus advisor. For example, last year I worked with a CSER postdoctoral candidate on a grant proposal for a postdoc project centred on technical modelling/assessment of extreme risks from sulphate aerosol geoengineering, but where the postdoc also wanted to explore broader socio/policy challenges. We felt we had the in-house expertise for the latter but not the former. We set up an arrangement whereby he would be advised by a climate specialist in this area, and spend a period of the postdoc with the specialist’s group in Germany. (The proposal was unfortunately unsuccessful with the granting body.)
As we expect AI to be a continuing focus, we’re developing good connections with AI specialist groups in academia and industry in Cambridge, and would similarly expect that a postdoc with a CS background might split their time between CSER’s interdisciplinary group and a technical group working in this area with an interest in long-term safe/responsible AI development. The plan is to develop similar relations in bio and other key areas. If we feel we’re really not set up to support someone as well as seems necessary, and can’t figure out how to get around that, then yes, that may be a good reason not to proceed at a given time. That said, during my time at FHI, a lot of good research has been done without these kinds of setups – and incidentally, I don’t think being at FHI has ever harmed anyone’s long-term career prospects – so they won’t always be necessary.
“And overly-broad job listings are par for the course, but before I personally would want to put together a 3-page project proposal or hunt down a 10-page writing sample relevant or even comprehensible to people outside of my field, I’d like to have some sense of whether anyone would even read them, or whether they’d just be confused as to why I applied.”
An offer: if you (or anyone else) have these kinds of concerns and wish to send me something short (say a 1/3–1/2 page proposal/info about yourself) before investing the effort in a full application, I’ll be happy to read it and say whether it’s worth applying (warning: it may take me until the weekend on any given week).
Thanks so much for your thoughtful response. This clarifies the position dramatically and makes it sound much more attractive. If I have any further questions related to my application specifically, I’ll certainly let you know.