I’ve read the 2021 MIRI conversations sequence, and various other writings by Nate and Eliezer. I found their explanations of convergent instrumental goals, agency, and various other topics convincing and explanatory, without needing much further thinking of my own.
I think in most or all cases, they were doing their best to explain clearly, without worrying much about infohazards. But the concepts are complicated and counter-intuitive, and sometimes when their explanations weren’t landing, they decided to move on to other topics.
So, I think you should feel free to try communicating as clearly as you can, without holding back because of worries about infohazards. Perhaps you’ll succeed in explaining where others have failed.
(Also, if you do succeed in writing what you think is an infohazardously-good explanation, you can just ask someone you trust to read it privately before posting it publicly.)