The assumption that AGI is a likely development within the coming decades is quite controversial among ML researchers. ICML reviewers might wonder what justifies this claim and how much of the paper remains relevant to readers who are more dubious about the development of AGI.
IMO this. For a legible paper, you more or less shouldn’t assume it, but rather suggest consequences.
The definition of situational awareness feels quite vague to me. As written (“identifying which abstract knowledge is relevant to the context in which they’re being run, and applying that knowledge when choosing actions”), it seems to encompass, for example, the ability to ingest information such as “pawns can attack diagonally” and apply it when playing a game of chess. Ajeya’s explanation of situational awareness feels much clearer to me.
Yeah, I think copying Ajeya is good.