Also a positive update for me on interdisciplinary conceptual alignment being automatable differentially soon. This has seemed plausible to me for a long time: LLMs have ‘read the whole internet’, and interdisciplinary insights often seem (to me) to require relatively few inferential hops (plausibly because it’s hard for humans to have [especially deep] expertise in many different domains), making such insights potentially feasible for LLMs differentially early (reliably making long inferential chains still seems among the harder things for LLMs).