I don’t believe there was a strategic update in favor of reducing secrecy at MIRI. My model is that everything they said would be secret is still secret. The increase in public writing is not because public writing became more promising, but because all their other work became less promising.
There used to be very strong secrecy norms at MIRI. There was a strategic update on the usefulness of public debate and reducing secrecy.
Everything on the AI Alignment Forum is by default also shown on LessWrong. The AI Alignment Forum is a way to filter out amateur work.
Maybe “secrecy” is the wrong way to phrase it. The main point is that MIRI’s strategy shifted toward more public writing.