Navigating AI Risks (NAIR) #1: Slowing Down AI

Here’s the first edition (on slowing down AI) of Navigating AI Risks, a newsletter on AI governance that some colleagues and I are launching.

The newsletter is aimed mostly at policymakers, but it may also interest some of you as a way to keep up with the ideas circulating in AI governance.

Here’s a bullet-point summary:

  • The open letter, signed by Yoshua Bengio and Stuart Russell among others, focuses on the largest language models and would not affect most AI systems.

  • Rationale:

    • Avoid a race to the bottom

    • Give society time to adapt (e.g., the impact on white-collar jobs, given the incredible speed of AI development)

    • Develop basic laws and guardrails

    • Some experts worry about existential risks from AI

    • Prevent foreseeable misuse of AI (disinformation, large-scale hacking)

  • Proposals:

    • A 6-month training pause

    • A full shutdown

    • A conditional slowdown, i.e., slowing down for as long as there are safety failures and high risks to society

  • Difficulties:

    • Coordination is hard

    • China could benefit from a Western slowdown.
