So, I’m often tempted to mention my x risk motivations only briefly, then focus on whatever’s inferentially closest and still true. (Classically, this would be “misuse risks, especially from foreign adversaries and terrorists” and “bioweapon and cyberoffensive capabilities coming in the next few years”.)
One heuristic that I’m tempted to adopt and recommend is the onion test: your communications don’t have to emphasize your weird beliefs, but they should satisfy the criterion that if your interlocutor became aware of everything you think, they would not be surprised.
This means that when I’m talking with a potential ally, I’ll often mostly focus on places where we agree, while also being intentional about flagging that I have disagreements that they could double click on if they wanted.
I’m curious if your sense, Olivia, is that your communications (including the brief communication of x risk) pass the onion test.
And if not, I’m curious what’s hard about meeting that standard. Is this a heuristic that can be made viable in a context like DC?