So, if I’m interpreting this paper correctly, it suggests that we should be putting effort into two things:
Reducing enmity between teams.
Reducing the number of teams.
It seems as though the first could be achieved partly by accomplishing the second (if we reduce the number of teams by merging them), and since capability increases with team size, the largest team would lose less by devoting effort to safety measures.
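To make that intuition concrete, here is a minimal sketch (my own toy simplification, not the paper's actual model) in which safety effort subtracts directly from capability and a team's chance of winning the race is its share of total effective capability; under these assumptions, the same safety spend costs a leading team noticeably less win probability than it costs a trailing one.

```python
# Toy illustration (an assumed simplification, not the paper's exact model):
# effective capability = raw capability - safety effort, and the chance of
# winning the race is a team's share of total effective capability.

def win_probability(own_capability: float, own_safety: float,
                    rival_capability: float, rival_safety: float = 0.0) -> float:
    """Share of total effective capability; effective = capability - safety effort."""
    own_effective = max(own_capability - own_safety, 0.0)
    rival_effective = max(rival_capability - rival_safety, 0.0)
    total = own_effective + rival_effective
    return own_effective / total if total > 0 else 0.5

SAFETY_EFFORT = 2.0  # identical safety spend in both comparisons

# Leading team (capability 10) racing a rival with capability 5.
leader_loss = win_probability(10, 0, 5) - win_probability(10, SAFETY_EFFORT, 5)

# Trailing team (capability 5) racing a rival with capability 10.
trailer_loss = win_probability(5, 0, 10) - win_probability(5, SAFETY_EFFORT, 10)

print(f"Win-probability cost of safety for the leader:  {leader_loss:.3f}")   # ~0.051
print(f"Win-probability cost of safety for the trailer: {trailer_loss:.3f}")  # ~0.103
```

In this toy setup the leader gives up about 0.05 of win probability while the trailer gives up about 0.10, which matches the intuition that the front-runner can afford safety more cheaply.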
So essentially, we should hope for a monopoly on AI, one with enough money and influence to absorb the majority of AI researchers and to buy out the smaller AI groups. This makes me wonder whether non-profit groups (if they are involved mainly in AI capability research, not purely safety) are actually capable of fulfilling this role, since they would not have quite the financial advantage that strictly profit-oriented AI organizations would have.