The largest issue with this approach/view is that it’s not addressing the distinction between:
- Increased resources for things with “AI Safety” written on them.
- Increased resources for approaches that stand a chance of working.
The problem is important in large measure due to its difficulty: that we need to hit a very small target we don’t yet understand. By default, resources allocated to anything labelled “AI safety” will not be aimed at that target.
If things are politicised, it’s a safe bet they won’t be aimed at that target; politicised issues get money thrown in their general direction, but that’s not about actually solving the problem. There’s a big difference between [more money helps, all else being equal] and [action x gets more money, therefore action x helps]. Politicisation would have many downsides.
Likewise, even if we get more attention/money… there are potential signal-to-noise issues. Suppose that there are 20 people involved in grant allocation with enough technical understanding to pick out promising projects. Consider two cases:
1. They receive 200 grant applications, 20 of which are promising.
2. They receive 2000 grant applications, 40 of which are promising.
In case (2) there are more promising projects in absolute terms, but it’s not clear that the grant evaluators will find more of them: the proportion of promising applications has dropped from 10% to 2%, so the signal-to-noise ratio is far worse.
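To make that concrete, here’s a minimal Monte Carlo sketch. All the modelling choices are illustrative assumptions on my part (a quality gap of 1.0, Gaussian scoring noise, and a fixed capacity of 20 funded projects), not anything more principled:

```python
# Toy model: evaluators see each application's true quality plus Gaussian
# noise, and can only fund the top `capacity` scorers. How many of the
# funded projects are actually promising?
import random

def expected_promising_funded(n_apps, n_promising, capacity=20, noise=1.0, trials=2000):
    total = 0
    for _ in range(trials):
        apps = []
        for i in range(n_apps):
            is_promising = i < n_promising          # first n_promising apps are the good ones
            quality = 1.0 if is_promising else 0.0  # assumed quality gap of 1.0
            score = quality + random.gauss(0.0, noise)
            apps.append((score, is_promising))
        apps.sort(reverse=True)                     # fund the highest-scoring applications
        total += sum(flag for _, flag in apps[:capacity])
    return total / trials

print("case (1): 200 apps, 20 promising ->", expected_promising_funded(200, 20))
print("case (2): 2000 apps, 40 promising ->", expected_promising_funded(2000, 40))
```

Under these particular assumptions, case (2) ends up funding fewer promising projects than case (1), even though twice as many exist; the exact numbers depend entirely on the assumed noise level and funding capacity, but the qualitative point is just that a larger, noisier pool doesn’t automatically translate into more good projects getting funded.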
The obvious answer is to train more grant evaluators to the point where they have the necessary expertise—but this is a slow process that’s (currently) difficult to scale (though people are working on that).
You also seem to be cherry-picking the upside possibilities from increased awareness: yes, some people may start to work on or advocate for AI safety (and some small proportion of those under a useful interpretation of “AI safety”). However, some people may also:
- Hear about AI safety, realise that AGI is a big deal but not buy the safety arguments, and start working on AGI.
  - This is not a hypothetical situation, or something that only happens to people without much ability: if John Carmack can get this badly wrong, where are you getting your confidence that most people won’t?
- Realise that AGI is a big deal, think that the major issues are misuse and/or ethics, and make poor decisions on that basis.
- Make sure we get AGI before them...
- Put in regulations that focus the ‘safety’ resources of AI companies on ticking meaningless boxes that do nothing to mitigate x-risk. (Though I’d guess the default situation looks largely like this anyway.)
We need to argue that increased awareness would be net positive, not simply that it would have some positive outcomes (I don’t think anyone would dispute the latter).
Again, I do think that there’s some communication strategy we should be using that beats the status quo. However, it needs to be analysed carefully, and carried out carefully, with adjustment based on empirical feedback where possible. (My guess is that the best approaches would be highly targeted, not that this says much at all.)