Your advice seems pretty close to what I’m saying in this post, just with a different framing. Instead of “don’t think in terms of comparative advantage” I’m saying “be careful when thinking in terms of comparative advantage because it’s probably trickier than you think”. I guess my framing is more useful when someone already tends to think in terms of comparative advantage (for example because they learned about it in economics and it seems like a really important insight).
Most of all, if your project is built around a potential secret, then really go for it.
I’d add that in some situations (e.g., if the secret is relevant to some altruistic aim), instead of building a project around the secret (with the connotation of keeping it safe), the aim of your project should be to first disperse the secret amongst others who can help, and then possibly step out of the way once that’s done.
I do think there’s variance in the communicability of such insights. For example, Holden when thinking of starting GiveWell, and Eliezer when thinking of building MIRI (initially SIAI), both correctly just tried to build the thing they believed could exist, rather than first closing the inferential gap so that a much larger community could understand it. OTOH, EY wrote the Sequences, and Holden has put a lot of work into making OpenPhil’s and GiveWell’s decision making understandable, and both efforts have had massive payoffs.
That seems correct.