Interesting line of thought. I don't know who would do it or how, but I still think we should consider now whether it would be a good idea in principle.
Can I restate your idea as 'we have a certain amount of convinced manpower, and we should use it for the best purpose, which is AI safety'? I like this way of thinking, but I still think we should devote some of that manpower to looking into postponement. Arguments:
- The vast majority of people are unable to contribute meaningfully to AI safety research. In theory, all of these people could earn as much money as possible and donate it to AI safety research, but most will not do that in practice. Many of them could, however, work on the much more generic task of convincing others about AI risk, including arguing for postponement. As an example, I once saw a project aiming to teach children about AI safety that claimed it could not continue for lack of $5,000 in funding. I think there is a vast sea of resource-constrained possibilities out there once we decide that telling everyone about AI risk is officially a good idea.
- Postponement seems, oddly, to be a neglected topic both within the AI safety community (out of a dislike of regulation, I guess) and outside it (for lack of insight into AI risk). I think it is far more neglected at this point than technical AI safety, which is perhaps also a niche field but at least has its own institutes already working on it. Since postponement looks both important and neglected, an hour spent on it is probably better spent than an hour on AI safety, unless perhaps you are a talented AI safety researcher.
The idea that most people who can't do technical AI alignment are therefore able to do effective work in public policy or in motivating public change seems unsupported by anything you've said. And a key problem with "raising awareness" as a method of risk reduction is that it's rife with infohazard concerns. For example, if we're really worried about a country seizing a decisive strategic advantage via AGI, then publicizing that concern tells every country that it should be much more motivated to pursue AGI.
And within the realm of international agreements and the pursuit of AI regulation, I don't think postponement is neglected, at least not relative to its tractability; policy for AI regulation is certainly an area of active research.