This should help a little:
Further information about the positions
Projects:
(i) Ethics of extreme technological risk (working with Professor Partha Dasgupta)
This subproject aims to examine the limitations of standard cost-benefit analysis (CBA) as a means of assessing the importance of mitigating extreme technological risk (ETR); to develop a version of CBA more suitable to this context; and to derive conclusions about the importance of mitigating ETR compared to other global priorities. Relevant disciplines include: Philosophy (especially moral philosophy, applied ethics, and formal decision theory) and Economics (e.g., the economics of sustainability, the theory of future discounting). (A worked discounting example follows this list.)
(ii) Horizon-scanning and foresight for extreme technological risk (working with Professor William Sutherland)
Successful management of ETR is likely to require early detection. This subproject aims to optimise the horizon-scanning and foresight techniques available for this task, and to understand the similarities and differences between the case of ETR and other horizon-scanning applications. Relevant disciplines include: Zoology and Ecology, Conservation Studies, Science and Technology Studies, Psychology.
(iii) Responsible innovation and extreme technological risk (working with Dr Robert Doubleday and Professor Martin Rees)
This subproject asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. Relevant disciplines include: Science and Technology Studies, Geography, Philosophy of Science, plus relevant technological fields (e.g., AI, Virology, Synthetic biology).
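(An aside on why discounting is the crux of subproject (i), as flagged above. This is a minimal worked example; the 3% rate and the 500-year horizon are illustrative assumptions of mine, not figures from the posting. Under standard exponential discounting at annual rate \delta, a benefit U_t received t years from now has present value

\[ PV = \frac{U_t}{(1+\delta)^t}, \]

so at \delta = 0.03 a benefit 500 years out is scaled down by a factor of (1.03)^{500} \approx 2.6 \times 10^6. Since most of the value lost to an extinction-level event lies centuries or more in the future, any strictly positive \delta makes that loss all but vanish from a standard CBA; this is presumably the kind of limitation the subproject sets out to examine.)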
It’s sort of not that useful though. This is a description of the “shovel-ready” projects, and those are actually pretty straightforward. If you fit into one of those categories, you’d basically be under a single person with a well-defined discipline, and you can get a pretty good sense of who you’d be working for by scanning a half-dozen paper abstracts if you’re not already familiar with them. There’s a decent chance you’d actually be funded directly out of the individual professor’s research grant. It’s pretty much business as usual.
But being a post-doc at an interdisciplinary center can be a lot more confusing. If the center has someone who is an expert in your field, then they’re semi-qualified to supervise your work and they sort of become your boss by default. If there isn’t an expert in your field, the standard academic mentor-apprentice model starts to break down, and it’s not always clear what will replace it. Sometimes you become predominantly a lackey/domain expert/flex researcher for existing projects. Sometimes the center recruits someone to mentor you. Sometimes you are expected to develop a novel focus for the group. If the group has been around for a while, you can estimate a lot of these answers just from its publication history, but with something brand new it’s much harder.
And this is a stupidly hard problem to even describe. It isn’t clear what department “All the things that might possibly go wrong that would make us all die” belongs in. On some level I understand why the “Specialist knowledge and skills” are super-vague general things like “good level of scientific literacy” and “strong quantitative reasoning skills.” And overly broad job listings are par for the course, but before I personally would want to put together a 3-page project proposal or hunt down a 10-page writing sample relevant, or even comprehensible, to people outside of my field, I’d like to have some sense of whether anyone would even read them, or whether they’d just be confused as to why I applied.
Hi Leplen,
I’m only assisting CSER on a casual basis, but here are some rough notes that should at least be helpful.
As you point out, the job description is general because the enterprise is interdisciplinary and there are a lot of ways that people could contribute to it. Projects apart from those specified would be designed largely to match the available personnel and their expertise. If someone wanted to contribute on a specific technology, such as the nanotech risks you’ve previously written about on this forum, and had a credible background relevant to that risk, then we wouldn’t be left wondering why they were applying. Still, I agree that we should make future job postings more specific, and I expect that we will do this.
In relation to who would be available to supervise applicants in areas other than those advertised, it can be helpful to look at CSER’s Cambridge-based advisors. In policy, for example, there is not only Robert Doubleday from the Centre for Science and Policy but also others advising, so this would obviously be a strong area. Another example is that Huw Price, one of the founders, is significantly interested in the application of decision theory to AI safety, so opportunities may arise in that area over time.
It doesn’t seem likely that domain experts would simply be passed around existing projects, because CSER is actively interested in performing thorough and ongoing analysis of relevant risks and in how to promote the safe development of relevant technologies.
If you have a question about whether CSER is interested in performing research in, and has the capability to supervise, some specific area, then it’s probably best just to ask.