On the first argument, I replied that I think a non-AGI safety group could do this, and therefore not hurt the principally unrelated AGI safety efforts. Such a group could even call for reduction of existential risk in general, further decoupling the two efforts.
It sounds like you are suggesting that someone, somewhere should do this. Who, and how? Until a specific idea is put forward, all I can say is that pausing AGI would be good, since misaligned AGI would be bad. I don’t know how you’d do it, but if I’m choosing between two worlds, the one without misaligned AGI seems likely to be better.
But in my mind, the proposal falls apart as soon as you ask who this group is, and whether this hypothetical group has any leverage or any arguments that would convince people who are not already convinced. If the answer is yes, why do we need this new group to do it, and wouldn’t we be better off using that leverage to increase the resources and effort put into AI safety?
Interesting line of thought. I don’t know who or how, but I still think we should already be considering whether it would be a good idea in principle.
Can I restate your idea as ‘we have a certain amount of convinced manpower, and we should use it for the best purpose, which is AI safety’? I like that way of thinking, but I still think we should use some of that manpower to look into postponement. Arguments:
- The vast majority of people are unable to contribute meaningfully to AI safety research. Of course, all these people could theoretically do whatever makes the most money and donate it to AI safety research, but in practice most will not. I think many of these people could be put to the much more generic task of convincing others about AI risks, and also arguing for postponement. As an example, I once saw a project aimed at teaching children about AI safety which claimed it could not continue for lack of $5,000 in funding. I think there’s a vast sea of resource-constrained possibility out there once we decide that telling everyone about AI risk is officially a good idea.
- Postponement weirdly seems to be a neglected topic within the AI safety community (out of a dislike of regulation, I guess), but also outside the community (for lack of insight into AI risk). I think it’s a lot more neglected at this point than technical AI safety, which is perhaps also niche but does already have its own institutes looking at it. Since it looks important and neglected, I think an hour spent on postponement is probably better spent than an hour on AI safety, unless perhaps you’re a talented AI safety researcher.
The idea that most people who can’t do technical AI alignment are therefore able to do effective work in public policy or in motivating public change seems unsupported by anything you’ve said. And a key problem with “raising awareness” as a method of risk reduction is that it’s rife with infohazard concerns. For example, if we’re really worried about a country seizing a decisive strategic advantage via AGI, then publicizing that possibility gives countries much more motivation to pursue AGI.
And within the realm of international agreements and the pursuit of AI regulation, I don’t think postponement is neglected, at least relative to its tractability; policy for AI regulation is certainly an area of active research.