Agreed, but I initially downvoted this for being obviously unproductive, then upvoted it for being an exquisite proof by absurdity about what's productive: this is the first time I have seen clearly how good communication must forbid some amount of nuance.
The insight: you have a limited amount of time to communicate arguments and models, i.e., methods for reproducing some of your beliefs. With most people, you will never have enough time to transmit your entire technoeschatology or xenoeconomics. It is useless to make claims about it, as the recipient has no way of checking them for errors or deception. You can only communicate approximations and submodules. No one will ever see the whole truth. (You do not see the whole truth. Your organization, even just within itself, will never agree on the whole truth.)
I don’t think you should generally upvote things on the basis that they indirectly explain something by being unproductive, lol.
I guess in this case I’m arguing that it’s accidentally, accidentally productive.
I wrote [a two-paragraph explanation](https://www.lesswrong.com/posts/4ceKBbcpGuqqknCj9/the-two-paragraph-argument-for-ai-risk) of AI doom not too long ago.