Having documented the epistemic consequences of realist metaphysics (https://doi.org/10.6084/m9.figshare.19204896.v1), I have had it pointed out to me that even non-realists might endorse those consequences. That should be no surprise, since non-realists are in community with realists, and sharing epistemic practices (like sharing language) makes community easier. No doubt realists would endorse non-realist practices in return, if non-realism made any such practice necessary.
What makes this observation relevant to this article is the potential for non-realist positions to become irrelevant in practice, because realist epistemic practices so deeply shape the practice of rationality. Community is an asset, so we can expect the best ASI to foster community, and we can expect that community (including the ASI) to engage in realist epistemic practices. One might temporarily isolate oneself in a “truth” selected to bring comfort, and one might even win political power by facilitating such comfort, but community division is unlikely to be sustainable. In other words, realism might inevitably end up treated as true, even if false.
The thesis that AI threatens to orchestrate sustainable social reform would belong in the 2025 review, but may I suggest adding a new kind of agenda to your taxonomy?
Most of your current categories focus on technology, but this article focuses on safety, on the nature of our self-destruction/warfare, and it explores what is technically needed from AI to solve it. It identifies caste systems that predate AI, notes that they are dangerous (perhaps increasingly so), and shows how to adjust AI designs and evaluation processes accordingly.
Perhaps the agenda could be titled “Understand safety” or “Understand ourselves”, and the increase in social-impact research could be reflected there.