
Lee Sharkey

Karma: 1,712

Goodfire (London). Formerly cofounded Apollo Research.

My main research interests are mechanistic interpretability and inner alignment.

‘Fundamental’ vs ‘applied’ mechanistic interpretability research

Lee Sharkey · May 23, 2023, 6:26 PM
65 points
6 comments · 3 min read · LW link

A technical note on bilinear layers for interpretability

Lee Sharkey · May 8, 2023, 6:06 AM
59 points
0 comments · 1 min read · LW link
(arxiv.org)

A small update to the Sparse Coding interim research report

Apr 30, 2023, 7:54 PM
61 points
5 comments · 1 min read · LW link

Why almost every RL agent does learned optimization

Lee Sharkey · Feb 12, 2023, 4:58 AM
32 points
3 comments · 5 min read · LW link

[Interim research report] Taking features out of superposition with sparse autoencoders

Dec 13, 2022, 3:41 PM
150 points
23 comments · 22 min read · LW link · 2 reviews

Current themes in mechanistic interpretability research

Nov 16, 2022, 2:14 PM
89 points
2 comments · 12 min read · LW link

Interpreting Neural Networks through the Polytope Lens

Sep 23, 2022, 5:58 PM
144 points
29 comments · 33 min read · LW link

Circumventing interpretability: How to defeat mind-readers

Lee Sharkey · Jul 14, 2022, 4:59 PM
114 points
15 comments · 33 min read · LW link