Let us be clear: hiding your beliefs, in ways that predictably lead people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.
Not only is it morally wrong, it makes for a terrible strategy. As it stands, the AI Safety Community itself cannot coordinate to state that we should stop AGI progress right now!
Some dynamics and gears in world models are protected secrets when they should be open-sourced and researched by more people; other gears are open-sourced and researched by too many people when they should be protected secrets. And some things are where they belong: some protected secrets should stay secret, and some open-source research should stay open.
Each individual case is decided (and disputed) on its own merits. For example, I think the contemporary use of AI for human thought steering should be a gear in more people's world models, and other people don't, but our reasons are specific to that topic. There is no all-encompassing policy here: staying silent can cause massive amounts of harm, but the counterfactual (telling everyone) can sometimes cause much more.