tlevin

Karma: 695

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher.

Not to be confused with the user formerly known as trevor1.

Skepticism towards claims about the views of powerful institutions

tlevin · Feb 13, 2025, 7:40 AM
46 points
2 comments · 4 min read · LW link

A case for donating to AI risk reduction (including if you work in AI)

tlevin · Dec 2, 2024, 7:05 PM
61 points
2 comments · LW link

How the AI safety technical landscape has changed in the last year, according to some practitioners

tlevin · Jul 26, 2024, 7:06 PM
55 points
6 comments · 2 min read · LW link

tlevin’s Shortform

tlevin · Apr 30, 2024, 9:32 PM
4 points
62 comments · LW link

EU policymakers reach an agreement on the AI Act

tlevin · Dec 15, 2023, 6:02 AM
78 points
7 comments · 7 min read · LW link

Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books)

tlevin · Sep 4, 2023, 7:02 PM
11 points
0 comments · 6 min read · LW link

Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)

Jan 31, 2023, 2:06 AM
28 points
0 comments · 2 min read · LW link

Update on Harvard AI Safety Team and MIT AI Alignment

Dec 2, 2022, 12:56 AM
60 points
4 comments · 8 min read · LW link