@Tamsin Leake
I’ve written a post with my thoughts related to this, though it doesn’t go specifically into whether UI tools help alignment or capabilities more. It touches on “sharing vs. keeping secret” in a general way, but not head-on enough that I can write a tl;dr here, and not along the threads we started in this conversation. The closest summary might be: broader discussion, sharing, and enhanced cognition give us more coordination, but they also risk world-ending discoveries being found before coordination can save us (not a direct quote from the post).
I found this question too difficult to think about, and the feeling that I had to reply here first was blocking me from digging into other subjects and developing my ideas, so I went ahead and published the post anyway.
https://www.lesswrong.com/posts/GtZ5NM9nvnddnCGGr/ai-alignment-via-civilizational-cognitive-updates