You say that, because we have no plan to solve AI risk, we cannot communicate that plan. That is not the same as being unable to take any beneficial actions.
“We do not know the perfect thing to do, therefore we cannot do anything”
One of the plans that was actually carried out was OpenAI, and it actually accelerated timelines. Taking action and that action having a positive effect are not the same thing.
There are things you can do to slow down AI progress, but if humanity dies one or two years later than it otherwise would, that's still not a very desirable outcome. Without a plan that leads to a desirable outcome, it's hard to convince people to follow it.
Politics tends to be pretty hostile to clear thinking, and both MIRI and CFAR thought that creating an environment where clear thinking can happen is crucial for solving AI risk.