How concerned are you about insights on this question also applying to unaligned AGI development?
Enough that I have considered keeping it secret, but I think keeping it public is a strong net positive relative to our current state (i.e. giant inscrutable vectors of floating-point numbers). If there were, say, another AI winter, then I could easily imagine changing my mind about that.