My personal impression is that you are mistaken: innovation has not stopped, but part of the conversation has moved elsewhere. E.g., taking just ACS, we have ideas from the past 12 months which, in our ideal world, would fit into this type of glossary: free energy equilibria, levels of sharpness, convergent abstractions, gradual disempowerment risks. Personally, I don’t feel it is a high priority to write them up for LW, because they don’t fit the current zeitgeist of the site, which seems to direct most of its attention to:
- advocacy
- topics a large crowd cares about (e.g. mech interpretability)
- topics some prolific, well-regarded writer cares about (e.g. people will read posts by John Wentworth)
Hot take, but the community loosely associated with active inference is currently a better place to think about agent foundations; workshops on topics like ‘pluralistic alignment’ or ‘collective intelligence’ have, in total, more interesting new ideas about what was traditionally understood as alignment; and parts of AI safety have gone fully ML-mainstream, with the fastest conversation happening on X.