You say that we have no plan to solve AI risk, so we cannot communicate that plan. That is not the same as not being able to take any beneficial actions.
“We do not know the perfect thing to do, therefore we cannot do anything”
Do we not consider timeline-extending things to be worthwhile?
This is a genuine question: Is it the prevailing wisdom that ultimately solving AI X-risk is the only worthwhile path and the only work worthy of pursuing? This seems to have been Eliezer’s opinion prior to GPT-3ish. That would answer the questions of my original post.
For example: MIRI could have established, funded, integrated, and entrenched a think tank / policy group in DC with the express goal of being able to make political moves when the time came.
Right now, today, they could be using those levers in DC to push “Regulate all training runs” or “Regulate graphics card production”, in the way that actually gets things done in DC, not just on Twitter.
Clearly those would not solve the X-risk, but it also seems pretty clear to me that, at the present moment, something like that would extend timelines.
To answer my own question, you might say:
Prior to 2022 it was not obvious that the political arena would be a leverage point against AI risk (neither for an ultimate fix nor even for extending timelines).
MIRI/CFAR did not have the resources to commit to something like that. It was considered but rejected in favor of what we thought were higher-leverage research possibilities.
You say that we have no plan to solve AI risk, so we cannot communicate that plan. That is not the same as not being able to take any beneficial actions.
“We do not know the perfect thing to do, therefore we cannot do anything”
One of the plans that was actually carried out was OpenAI, which actually accelerated timelines. Taking action and that action having a positive effect are not the same thing.
There are things you can do to slow down AI progress, but if humanity dies one or two years later than it otherwise would, that’s still not a very desirable outcome. Without a plan that leads to a desirable outcome, it’s hard to convince people of it.
Politics tends to be pretty hostile to clear thinking, and MIRI and CFAR both thought that creating an environment where clear thinking can happen is crucial for solving AI risk.