Currently an independent AI Safety researcher. Ex-software developer, ex-QA.
Prior to working in industry, I was involved in academic research on cognitive architectures (the old ones). I'm a generalist with a focus on human-like AIs (I know a thing or two about developmental psychology, cognitive science, ethology, and computational models of the mind).
Personal research vectors: the ontogenetic curriculum and narrative theory. The primary theme is consolidating insights from various mind-related areas into a plausible explanation of human value dynamics.
A long-time LessWronger (~8 years). I've mostly been active in my local LW community (as a participant and as an organiser).
Recently I organised a sort of peer-to-peer accelerator for anyone who wants to become an AI Safety researcher. Right now there are 17 of us.
I was part of AI Safety Camp 2023 (Positive Attractors team).
Agreed. That said, some efforts in this direction do exist: for example, Ekdeep Singh Lubana and his "Explaining Emergence in NN with Model Systems Analysis".
https://ekdeepslubana.github.io/