I think deeply understanding top-tier capabilities researchers’ views on how to achieve AGI is extremely valuable for thinking about alignment. Even if you disagree with their object-level views, understanding how very smart people come to their conclusions is valuable in itself.
I think the first sentence is true (especially for alignment strategy), but the second sentence seems sort of… broad-life-advice-ish rather than a specific tip? It’s pretty indirect help for most kinds of alignment work.
Otherwise, this comment’s points really do seem like empirical claims that people could put odds or probability ratios on. I wonder whether a more specific version of those “AI Views Snapshots” would be warranted for these sorts of “research meta-knowledge” cruxes. Heck, it might be good to have lots of AI Views Snapshot DLC Mini-Charts, ranging from ones for specific research agendas(?) to ones internal to organizations(?!).
One further observation about that second sentence.