Eh. It’s not unusual for the government to convene experts and ask, in a general sense, for worst-case disaster scenarios, with the intent of then working to reduce those risks.
Open-ended brainstorming about potential AI risk scenarios that could happen in the near future might be useful, if the overall goal of MIRI is to reduce AI risk.
MIRI is not the government, LW is not a panel of experts, and such analyses generally start with a long list of conditions they depend on.
No AI risk scenarios are going to happen in the near future.