MIRI “giving up” on solving the problem was probably a net negative to the community, since it severely demoralized many young, motivated individuals who might have worked toward actually solving the problem. An excellent way to prevent pathways to victory is by convincing people those pathways are not attainable. A positive, I suppose, is that many have stopped looking to Yudkowsky and MIRI for the solutions, since it’s obvious they have none.
But it seems like a good thing to do if indeed the solutions are not attainable.
Anyway, this whole question seems to be on the wrong level of analysis. You should do what you think works, not what you think doesn’t work but might trick others into trying anyway.
Added: To be clear, I too found MIRI largely giving up on solving the alignment problem demoralizing. I’m still going to keep working on preventing the end of the world regardless, and I don’t at all begrudge them seriously trying for 5-10 years.