I read this as being “maxipok”, with a few key extensions:
- The ‘default’ probability of success is very low.
- There are lots of plans that look like they give some small-but-relatively-attractive probability of success, which are basically all fake / picked by the motivated reasoning of “there has to be a plan.” (“If we cause WWIII, then there will be a 2% chance of aligning AI, right?”)
- While there aren’t accessible plans that cause success all on their own, there probably are lots of accessible sub-plans which make it more likely that a surprising real plan could succeed. (“Electing a rationalist president won’t solve the problem on its own, but it does mean ‘letters from Einstein’ are more likely to work.”)