I thought “making math progress and trying to get mathematicians in academia interested in these kinds of problems” was intended to be preparation
Yes, I believe that is indeed the intention, but it’s worth noting that the things that MIRI’s currently doing really allow them to pursue either strategy in the future. So if they give up on the “small FAI team” strategy because it turns out to be too hard, they may still pursue the “big academic research” strategy, based on the information collected at this and other steps.
If “small FAI team” is not a good idea, then I don’t see what purpose “making math progress and trying to get mathematicians in academia interested in these kinds of problems” serves, or how experimenting with it is useful.
“Small FAI team” might turn out to be a bad idea because the problem is too difficult for a small team to solve alone. In that case, it may be useful to actually offload most of the problems to a broader academic community. Of course, this may or may not be safe, but there may come a time when it turns out that it is the least risky alternative.
I think “big academic research” is almost certainly not safe, for reasons similar to my argument to Paul here. There are people who do not care about AI safety, either because of short planning horizons or because they think they have simple, easy-to-transmit values, and they will deploy the results of such research before the AI safety work is complete.
Of course, this may or may not be safe, but there may come a time when it turns out that it is the least risky alternative.
This would be a fine argument if there weren’t immediate downsides to what MIRI is currently doing, namely shortening AI timelines and making it harder to create a singleton (or get significant human intelligence enhancement, which could help somewhat in the absence of a singleton) before AGI work starts ramping up.
immediate downsides to what MIRI is currently doing, namely shortening AI timelines
To be clear, based on what I’ve seen you write elsewhere, you think they are shortening AI timelines because the mathematical work on reflection and decision theory would be useful for AIs in general and is not specific to the problem of friendliness. Is that right?
This isn’t obvious to me. In particular, the reflection work seems much more relevant to creating stable goal structures than to engineering intelligence / optimization power.