I’d add that in some situations (e.g., if the secret is relevant to some altruistic aim), instead of building a project around the secret (with the connotation of keeping it safe), the aim of your project should be to first disperse the secret amongst others who can help, and then possibly to step out of the way once that’s done.
I do think there’s variance in the communicability of such insights. For example, I think that Holden when starting GiveWell, and Eliezer when building MIRI (initially SIAI), both correctly just tried to build the thing they believed could exist, rather than first closing the inferential gap so that a much larger community could understand it. OTOH, EY wrote the sequences, and Holden has put a lot of work into making OpenPhil+GiveWell’s decision-making understandable, and both of these have had massive payoffs.
That seems correct.