You might be interested in this AI Safety Camp ’23 project I proposed on fine-tuning LMs on fMRI data, and in some of the linkposts I’ve published on LW, including e.g. The neuroconnectionist research programme, Scaling laws for language encoding models in fMRI, and Mapping Brains with Language Models: A Survey. Personally, I’m particularly interested in low-res uploads for automated alignment research, e.g. to plug into something like the superalignment plan (I have some shortform notes on this). For a rough sense of the kind of setup involved, see the sketch below.
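To make the basic idea concrete, here is a minimal, hypothetical sketch of the standard "language encoding model" setup the linked fMRI papers build on: extract frozen LM features for the stimuli a subject heard or read, then fit a linear map from those features to voxel responses. This is not the actual implementation from the AISC project or the papers; the choice of GPT-2, mean pooling, ridge regression, and the placeholder stimulus/voxel data are all assumptions for illustration.

```python
# Hypothetical sketch (not the linked projects' code): a linear "language
# encoding model" -- frozen LM hidden states -> fMRI voxel responses.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def embed(texts):
    """Mean-pooled GPT-2 hidden states, one feature vector per stimulus."""
    feats = []
    for t in texts:
        ids = tokenizer(t, return_tensors="pt")
        with torch.no_grad():
            h = model(**ids).last_hidden_state  # (1, seq_len, hidden_dim)
        feats.append(h.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

# Placeholder data: `stimuli` are the sentences a subject heard/read,
# `voxels` the matching (n_stimuli, n_voxels) BOLD responses.
stimuli = ["The cat sat on the mat.", "Dogs bark at strangers."]
voxels = np.random.randn(len(stimuli), 1000)  # stand-in for real fMRI data

X = embed(stimuli)
encoder = Ridge(alpha=1.0).fit(X, voxels)  # linear encoding model
predicted = encoder.predict(X)             # predicted voxel responses
```

Fine-tuning the LM itself on fMRI data (rather than keeping it frozen, as here) is one natural extension, and the scaling-laws linkpost is about how such encoding performance grows with model size.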