as far as I’m aware, her biggest contribution to the safety field as a whole is improved datasets, which is recent. That’s the sort of work that doesn’t get prioritized in existential AI safety because it’s too short-term and might aid capabilities. In general, I’d recommend reading the abstracts of her papers, but wouldn’t recommend pushing past an abstract you find uninteresting.
this one is probably the most interesting to me: https://arxiv.org/abs/2212.05129
OH WAIT CRAP THERE ARE TWO MMITCHELLS AND I MEANT THE OTHER ONE. well, uh, anyway, have a link to the other mmitchell’s paper that seems cool.
OP mmitchell also seems pretty cool, but maybe not quite as close to safety/alignment; her work seems to be focused on adversarial examples: https://melaniemitchell.me/ & https://arxiv.org/abs/2210.13966