I believe the working title is ‘Intelligence Rising’
This is super awesome. Thank you for doing this.
Johnson was perhaps below average in his application to his studies, but it would be a mistake to think he is/was a pupil of below-average intelligence.
I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy.
With Mustafa Suleyman, the cofounder most focused on applied work (and the leader of DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being a primarily research company with fewer applied staff (an area that can soak up a lot of staff), resulting in a 20% reduction of staff, probably wouldn’t provide a lot of evidence (and is probably not what Robin had in mind). A reduction of research staff, on the other hand, would be very interesting.
(Cross-posted to the EA forum.) (Disclosure: I am executive director of CSER.) Thanks again for a wide-ranging and helpful review; this represents a huge undertaking of work and is a tremendous service to the community. For the purpose of completeness, I include below 14 additional publications authored or co-authored by CSER researchers in the relevant time period and not covered above (and one that falls just outside it but was not previously featured):
Global catastrophic risk:
Ó hÉigeartaigh. The State of Research in Existential Risk
Avin, Wintle, Weitzdorfer, O hEigeartaigh, Sutherland, Rees (all CSER). Classifying Global Catastrophic Risks
International governance and disaster governance:
Rhodes. Risks and Risk Management in Systems of International Governance.
Biorisk/bio-foresight:
Rhodes. Scientific freedom and responsibility in a biosecurity context.
Just missing the cutoff for this review, but not included last year and so possibly of interest, is our bioengineering horizon scan (published November 2017): Wintle et al (incl Rhodes, O hEigeartaigh, Sutherland). Point of View: A transatlantic perspective on 20 emerging issues in biological engineering.
Biodiversity loss risk:
Amano (CSER), Szekely… & Sutherland. Successful conservation of global waterbird populations depends on effective governance (Nature publication)
CSER researchers as coauthors:
(Environment) Balmford, Amano (CSER) et al. The environmental costs and benefits of high-yield farming
(Intelligence/AI) Bhatnagar et al (incl Avin, O hEigeartaigh, Price): Mapping Intelligence: Requirements and Possibilities
(Disaster governance): Horhager and Weitzdorfer (CSER): From Natural Hazard to Man-Made Disaster: The Protection of Disaster Victims in China and Japan
(AI) Martinez-Plumed, Avin (CSER), Brundage, Dafoe, O hEigeartaigh (CSER), Hernandez-Orallo: Accounting for the Neglected Dimensions of AI Progress
(Foresight/expert elicitation) Hanea… & Wintle. The Value of Performance Weights and Discussion in Aggregated Expert Judgments
(Intelligence) Logan, Avin et al (incl Adrian Currie): Uncovering the Neural Correlates of Behavioral and Cognitive Specialization
(Intelligence) Montgomery, Currie et al (incl Avin). Ingredients for Understanding Brain and Behavioral Evolution: Ecology, Phylogeny, and Mechanism
(Biodiversity) Baynham-Herd, Amano (CSER), Sutherland (CSER), Donald. Governance explains variation in national responses to the biodiversity crisis
(Biodiversity) Evans et al (incl Amano). Does governance play a role in the distribution of invasive alien species?
Outside of the scope of the review, we produced on request a number of policy briefs for the United Kingdom House of Lords on future AI impacts; horizon-scanning and foresight in AI; and AI safety and existential risk, as well as a policy brief on the bioengineering horizon scan. Reports/papers from our 2018 workshops (on emerging risks in nuclear security relating to cyber; nuclear error and terror; and epistemic security) and our 2018 conference will be released in 2019.
Thanks again!
It is possible they had timing issues whereby a substantial amount of work was done in earlier years but only released more recently. In any case they have published more in 2018 than in previous years.
(Disclosure: I am executive director of CSER.) Yes. As I described in relation to last year’s review, CSER’s first postdoc started in autumn 2015, and most started in mid-2016. The first stages of research and papers began being completed throughout 2017, with most papers then going to peer-reviewed journals. 2018 is more indicative of run-rate output, although 2019 will be higher.
Throughout 2016-2017, considerable CSER leadership time (mine in particular) also went into getting http://lcfi.ac.uk/ up and running, which will increase our output on AI safety/strategy/governance (although CFI also works separately on near-term topics and on topics not related to AI safety).
Thank you for another detailed review! (response cross-posted to EA forum too)
And several more of us were at the workshop at the Hague meeting that worked on and endorsed this section—Anders Sandberg (FHI), Huw Price and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output—otherwise I’m confident it would have been terrible ;)
FLI’s Anthony Aguirre is centrally involved or leading, AFAIK.
Thanks for the initiative! I’ll be there Thursday through Saturday (plus Sunday) for symposia and workshops, if anyone would like to chat (Sean O hEigeartaigh, CSER).
New Leverhulme Centre for the Future of Intelligence (developed at CSER, with spokes led by Bostrom, Russell, Shanahan)
A quick reminder: our application deadline is a week from tomorrow (midday UK time), so now would be a great time to apply if you were thinking of it, or to remind fellow researchers! Thanks so much, Seán.
A pre-emptive apology: I have a heavy deadline schedule over the next few weeks, so will answer questions when I can—please excuse any delays!
New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK)
“The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it (“create a subagent” is a generic way to get around most restriction ideas).” That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)
Nick, for his part, very regularly and explicitly credits the role that Eliezer’s work and discussions with Eliezer have played in his own research and thinking over the course of the FHI’s work on AI safety.
A few comments. I was working with Nick when he wrote that, and I fully endorsed it as advice at the time. Since then, the x-risk funding situation—and the number of locations at which you can do good work—has improved dramatically. It would be worth checking with him how he feels now. My view is that jobs are certainly still competitive, though.
In that piece he wrote: “I find the idea of doing technical research in AI or synthetic biology while thinking about x-risk/GCR promising.” I also strongly endorse this line of thinking. My view is that in addition to centres working specifically on x-risk, having x-risk-motivated people working in all the standard mainstream fields relevant to x-risk would be a big win. Not just AI or synthetic biology (although these are obviously directly valuable here) - I’d include areas like governance, international relations, science & technology studies, and so on. There will come a point (in my view) when having these concerns diffused across a range of fields and geographic locations will be more important than increasing the size of dedicated thought bubbles at e.g. Oxford.
“Do you guys want to share how pleased you were about the set of applicants you received for these jobs?” I can’t say too much about this, because the hires are not yet finalised, but yes, pleased. The hires we made are stellar. There were a number of people not hired whom at most times I would have thought to be excellent, but for various reasons the panel didn’t think they were right at this time. You will understand if I can’t say more about this (and my very sincere apologies to everyone I can’t give individual feedback to; I’m carrying a very heavy workload at the moment with minimal support).
That said, I wouldn’t be willing to stand up and say x-risk reduction is not talent-limited, as I don’t think there’s enough data for that. Our applicant field was large, and the top talent was deep enough on this occasion, but it could have been deeper. Both CSER and FHI have more hires coming up, so that will deplete the talent pool further.
Another consideration: I do feel that many of the most brilliant people the x-risk field needs are out there already, finishing their PhDs in relevant areas but not currently part of the field. I think organisations like ours need to work hard to reach out to these people.
Recruitment strategies: reaching out through our advisors’ networks. Standard academic job boards, and emails to the top 10-20 departments in the most relevant fields. Getting in touch with members of different x-risk organisations and asking them to spread the word through their networks. Posting online in various x-risk/EA-related places. I also got in touch with a large range of the smaller, more specific centres (and authors) producing the best work outside of the x-risk community (e.g. in risk, foresight, horizon-scanning, security, international relations, DURC, STS and so on), asked them for recommendations, and asked them to distribute the posting among their networks. And I iterated a few times through the contacts I made this way. E.g. I got in touch with Tetlock and others on expert elicitation & aggregation, who put me in touch with people at the Good Judgement Project and others, who in turn put me in touch with other centres. I eventually got some very good applicants in this space, including one from Australia’s Centre of Excellence for Biosecurity Risk Analysis, whose director I was put in touch with through this method but hadn’t heard of previously.
This was all very labour-intensive, and I expect I won’t have time to recruit so heavily in future. But I hope that going forward we will have a bigger academic footprint. I also had tremendous help from a number of people in the x-risk community, including Ryan Carey, Seth Baum, and FHI folks, to whom I’m very grateful. Also, a huge thanks to Scott Alexander for plugging our positions on his excellent blog!
I think our top 10 came pretty evenly split between the “x-risk community”, “standard academic job boards/university department emails” and “outreach to more specific non-x-risk networks” routes. I think all our hires are new introductions to existential risk, which is encouraging.
Re: communicating internally, I think we’re doing pretty well. E.g. on recruitment, I’ve been communicating pretty closely with FHI, as they also have positions to fill at present and coming up, and I will recommend that some excellent people who applied to us also apply to them. (Note that this isn’t always just about quality—we have both had excellent applicants who weren’t quite a fit at this time at one organisation, but would be a top prospect at the other, in both directions.)
More generally, internal communication within x-risk has been good in my view—project managers and researchers at FHI, MIRI and other orgs make a point of regular meetings with the other organisations. This has made up a decent chunk of my time too over the past couple of years and has been very important, although I’m likely to have to cut back personally for a couple of years due to an increasing Cambridge-internal workload (early days of a new, unusual centre in an old, traditional university). I expect our researchers will play an important role in communicating between centres, however.
One further apology: I don’t expect to have much time to comment/post on LW going forward, so I apologise that I won’t always be able to reply to questions like this. But I’m very grateful for all the useful support, advice and research input I’ve received from LW members over the years.
Nine single-author research papers is extremely impressive! Well done.
This does seem quite hazardous, though. If an emergency happened at 3am, I’m pretty sure I’d want my phone easily available and usable.
I was going to say this too, it’s a good point. Potential fix: have a cheap non-smartphone on standby at home.
Leplen, thank you for your comments, and for taking the time to articulate a number of the challenges associated with interdisciplinary research – and in particular, setting up a new interdisciplinary research centre in a subfield (global catastrophic and existential risk) that is in itself quite young and still taking shape. While we don’t have definitive answers to everything you raise, they are things we are thinking a lot about, and seeking a lot of advice on. While there will be some trial and error, given the quality and pooled experience of the academics most involved I’m confident that things will work out well.
Firstly, re: your first post, a few words from our Academic Director and co-founder Huw Price (who doesn’t have a LW account).
“Thanks for your questions! What the three people mentioned have in common is that they are all interested in applying their expertise to the challenges of managing extreme risks arising from new technologies. That’s CSER’s goal, and we’re looking for brilliant early-career researchers interested in working on these issues, with their own ideas about how their skills are relevant. We don’t want to try to list all the possible fields these people might come from, because we know that some of you will have ideas we haven’t thought of yet. The study of technological xrisk is a new interdisciplinary subfield, still taking shape. We’re looking for brilliant and committed people, to help us design it.
We expect that the people we appoint will publish mainly in the journals of their home field, thus helping to raise awareness of these important issues within those fields – but there will also be opportunities for inter-field collaborations, so you may find yourself publishing in places you wouldn’t have expected. We anticipate that most of our postdocs will go on to distinguished careers in their home fields, too, though hopefully in a way which maintains their links with the interdisciplinary xrisk community. We anticipate that there will also be some opportunities for more specialised career paths, as the field and funding expand.”
A few words of my own to expand: As you and Ryan have discussed, we have a number of specific, quite well-defined subprojects that we have secured grant funding for (two more will be announced later on). But we are also in the lucky position of having some more unconstrained postdoctoral position funding – and now, as Huw says, seems like an opportune time to see what people, and ideas, are out there, and what we haven’t considered. Future calls are likely to be a lot more constrained – as the centre’s ongoing projects and goals get more locked in, and as we need to hire very specific people to work on specific grants.
Some disciplines seem very obviously relevant to me – e.g. if the existential risk community is to do work on AI, synthetic biology, pandemic risk, geoengineering, it needs people with qualifications in CS/math, biology/informatics, epidemiology, climate modelling/physics. Disciplines relevant to risk modelling and assessment seem obvious, as do science & technology studies, philosophy of science, and policy/governance. In aiming to develop implementable strategies for safe technology development and x-risk reduction, economics, law and international relations seem like fields that might produce people with necessary insights. Some are a little less clear-cut: insights into horizon-scanning and foresight/technological prediction could come from a range of areas. And I’m sure there are disciplines we are simply missing. Obviously we can’t hire people with all of these backgrounds now (although, over the course of the centre, we would aim to have all these disciplines pass through and make their mark). But we don’t necessarily need to; we have enough strong academic connections that we will usually be able to provide relevant advisors and collaborators to complement what we have ‘in house’. E.g. if a policy/law-background person seems like an excellent fit for biosecurity work or biotech policy/regulation, we would aim to make sure there’s both a senior person in policy/law to provide guidance, and collaborators in biology to make sure the science is there. And vice versa.
With all that said, from my time at FHI and CSER, a lot of the biggest progress and ideas have come from people whose backgrounds might not have immediately seemed obvious to x-risk, at least to me – cosmologists, philosophers, neuroscientists. We want to make sure we get the people, and the ideas, wherever they may be.
With regards to your second post:
You again raise good questions. For the people who don’t fall squarely into the ‘shovel-ready’ projects (although the majority of our hires this year will), I expect we will set up senior support structures on a case-by-case basis, depending on what the project/person needs.
One model is co-supervision, or supervisor plus advisor. For one example, last year I worked with a CSER postdoctoral candidate on a grant proposal for a postdoc project that would have involved technical modelling/assessment of extreme risks from sulphate aerosol geoengineering, but where the postdoc also wanted to explore broader social/policy challenges. We felt we had the in-house expertise for the latter but not the former. We set up an arrangement whereby he would be advised by a climate specialist in this area, and spend a period of the postdoc with the specialist’s group in Germany. (The proposal was unfortunately unsuccessful with the granting body.)
As we expect AI to be a continuing focus, we’re developing good connections with AI specialist groups in academia and industry in Cambridge, and would similarly expect that a postdoc with a CS background might split their time between CSER’s interdisciplinary group and a technical group working in this area and interested in long-term safe/responsible AI development. The plan is to develop similar relations in bio and other key areas. If we feel like we’re really not set up to support someone as seems necessary and can’t figure out how to get around that, then yes, that may be a good reason not to proceed at a given time. That said, during my time at FHI, a lot of good research has been done without these kinds of setups – and incidentally I don’t think being at FHI has ever harmed anyone’s long-term career prospects—so they won’t always be necessary.
“And overly-broad job listings are par for the course, but before I personally would want to put together a 3-page project proposal or hunt down a 10-page writing sample relevant or even comprehensible to people outside of my field, I’d like to have some sense of whether anyone would even read them or whether they’d just be confused as to why I applied.”
An offer: if you (or anyone else) have these kinds of concerns and wish to send me something short (say a 1/3-1/2 page proposal/info about yourself) before investing the effort in a full application, I’ll be happy to read it and say whether it’s worth applying (warning: it may take me until the weekend in any given week).
(Disclosure: I am one of the coauthors.) Also, none of the linked comments by the coauthors actually praise the paper as “good and thoughtful”? They all say the same thing, which is “pleased to have contributed” plus a nice comment about the lead author (a fairly early-career scholar who did lots and lots of work and was good to work with). I called it “timely”, as the topic of open-sourcing was very much live at the time.
(FWIW, I think this post has valid criticism re: the quality of the biorisk literature cited and the strength with which the case was conveyed; and I think this kind of criticism is very valuable and I’m glad to see it).