I’m an admin of LessWrong. Here are a few things about me.
I generally feel more hopeful about a situation when I understand it better.
I have signed no contracts nor made any agreements whose existence I cannot mention.
I believe it is good to take responsibility for accurately and honestly informing people of what you believe in all conversations; and also good to cultivate an active recklessness about the social consequences of doing so.
It is wrong to directly cause the end of the world, even if you are fatalistic about what is going to happen.
I wrote this because I am increasingly noticing that the rules for “which worlds to keep in mind/optimize” are often quite different from the rules for “which worlds my spreadsheets say are the most likely”. This is in conflict with my heuristics, which would have said “optimize the world-models in your head for being the most accurate ones – the ones that will give you the most accurate answers to most questions” rather than something like “optimize the world-models in your head for being the most useful ones”.
(Though the true answer is some more complicated function combining both practical utility and map-territory correspondence.)
I will note that what is confusing to one person need not be confusing to another. In my experience it is common for one person in a conversation to be confused and the other not (whether because the latter person is pre-confused, post-confused, or simply because their path to understanding a phenomenon didn’t pass through the state of their interlocutor).
It seems probable to me that I have found this subject more confusing than others have.