(Back in 2017 I asked for examples of risk from AI, and didn’t like any of them all that much. Today, “someone asks an LLM how to kill everyone and it walks them through creating a pandemic” seems pretty plausible.)
My impression from the 2017 post is that concerns were framed as “superintelligence risk” at the time. The old post doesn’t spell out what that term was meant to cover, but it’s not clear to me that an LLM answering questions about how to create a pandemic qualifies as superintelligence.
This contrast seems mostly aligned with my long-standing instinct that folks worried about catastrophic risk from AI have tended to spend too much time worrying about machines achieving agency and not enough time thinking about machines scaling up the agency of individual humans.