This is a good question. To some extent I didn’t want to take a position on exactly which work is appropriate for this, as that’s independent of the rest of the analysis (although it obviously feeds into the model parameter estimates).
Something which would definitely help is a systematic review of what might be useful for AI-soon outcomes.
Possibilities include: studying the architecture of the more plausible candidates for producing AI; design work on containment mechanisms; producing high-quality data sets of ‘human values’ (in case value-learning is easy). I think those could all turn out to be useless ex post, but they may still be worth pursuing for the possibility that they prove useful.
There may also be useful lines which are already being pursued to a serious degree as part of cybersecurity.
One application of this might be for the FLI, in deciding where to grant the money they’ve received from Elon Musk. In addition to other considerations, the correct conclusion from your paper seems to be not to underestimate the value of funding research aimed at AI-soon scenarios, and also to fund it because it could create a research environment that yields a greater quantity and quality of research even on AI-later scenarios. Whatever funding ratio between the two scenarios they settle on will matter less if nobody can discern what counts as AI-soon vs. AI-later research.