In AI Alignment we luckily have the AI Alignment Newsletter, which seems to cover basically everything happening in the field.
Depends on what you call “the field”: there are a fair number of judgment calls on my part, and the summaries are definitely biased towards things I can understand quickly. (For example, many short LW posts about AI alignment don’t make it into the newsletter.)
Yeah, agree. Edited to clarify a bit.