Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. Additionally, they already have a larger budget than any other organisation (except perhaps FHI) and a large amount of reserves.
Despite FHI producing very high-quality research, GPI having many promising papers in the pipeline, and both having highly qualified and value-aligned researchers, the requirement to pre-fund a researcher’s entire contract significantly increases the effective cost of funding research there. On the other hand, hiring people in the Bay Area isn’t cheap either.
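To make the pre-funding point concrete, here is a minimal sketch with purely hypothetical numbers (the annual cost, contract length, and discount rate are my assumptions, not figures from FHI or GPI). If a researcher costs $c = \$100\text{k}$ per year on a $T = 3$ year contract, and a donor discounts future spending at $r = 5\%$, the whole contract must be committed upfront:

$$cT = \$300\text{k}, \qquad \text{versus a pay-as-you-go present cost of } \sum_{t=0}^{T-1} \frac{c}{(1+r)^t} \approx \$286\text{k}.$$

On these illustrative numbers the upfront requirement adds roughly 5% to the effective cost, and the gap grows with longer contracts or higher discount rates; it also means a small marginal donation does not by itself fund any research until a full contract is covered.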
This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year.
I think of CSER and GCRI as relatively comparable organisations, as 1) they both work on a variety of existential risks and 2) both primarily produce strategy pieces. In this comparison I think GCRI looks significantly better: it is not clear that their total output, all things considered, is any less than CSER’s, yet they have produced it on a dramatically smaller budget. As such I will be donating some money to GCRI again this year.
ANU, DeepMind and OpenAI have all done good work, but I don’t think it is viable for (relatively) small individual donors to meaningfully support their work.
Ought seems like a very valuable project, and I am torn on donating, but I think their need for additional funding is slightly less pressing than that of some other groups.
AI Impacts is in many ways in a similar position to GCRI, except that GCRI is attempting to scale by converting its part-time staff to full-time, while AI Impacts is scaling by hiring new people. The former is significantly lower risk, and AI Impacts seems to have enough money to try out the expansion in 2019 anyway. As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019.
The Foundational Research Institute have done some very interesting work, but seem to be adequately funded, and I am somewhat more concerned about the danger of risky unilateral action here than I am with other organisations.
I haven’t had time to evaluate the Foresight Institute, which is a shame because, at their small size, marginal funding could be very valuable if they are in fact doing useful work. Similarly, Median and Convergence seem too new to really evaluate, though I wish them well.
The Future of Life Institute grants for this year seem more valuable to me, on average, than the previous batch. However, I prefer to directly evaluate where to donate, rather than outsourcing this decision.
I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. The current situation, with a binary employed/not-employed distinction and upfront payment for uncertain output, seems suboptimal. I also hope to significantly reduce overhead (for everyone but me) by having no application process and no requirements for grantees beyond having produced good work. This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.