
tlevin

Karma: 615

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil’s resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.

Not to be confused with the user formerly known as trevor1.

A case for donating to AI risk reduction (including if you work in AI)

tlevin · 2 Dec 2024 19:05 UTC
61 points
2 comments · 1 min read · LW link

How the AI safety technical landscape has changed in the last year, according to some practitioners

tlevin · 26 Jul 2024 19:06 UTC
55 points
6 comments · 2 min read · LW link

tlevin’s Shortform

tlevin · 30 Apr 2024 21:32 UTC
4 points
54 comments · 1 min read · LW link

EU policymakers reach an agreement on the AI Act

tlevin · 15 Dec 2023 6:02 UTC
78 points
7 comments · 7 min read · LW link

Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books)

tlevin · 4 Sep 2023 19:02 UTC
11 points
0 comments · 6 min read · LW link

Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)

31 Jan 2023 2:06 UTC
28 points
0 comments · 2 min read · LW link

Update on Harvard AI Safety Team and MIT AI Alignment

2 Dec 2022 0:56 UTC
60 points
4 comments · 8 min read · LW link