
Ram Potham

Karma: 51

My goal is to do work that counterfactually reduces AI risk from loss-of-control scenarios. My perspective is shaped by my experience as the founder of a VC-backed AI startup, which gave me a firsthand understanding of the urgent need for safety.

I have a B.S. in Artificial Intelligence from Carnegie Mellon and am currently a CBAI Fellow at MIT/Harvard. My primary project is ForecastLabs, where I’m building predictive maps of the AI landscape to improve strategic foresight.

I subscribe to Crocker’s Rules (http://sl4.org/crocker.html) and am especially interested in hearing unsolicited constructive criticism. Inspired by Daniel Kokotajlo.


I Tested LLM Agents on Simple Safety Rules. They Failed in Surprising and Informative Ways.

Ram Potham · Jun 25, 2025, 9:39 PM · 8 points · 5 comments · 6 min read · LW link

AI Control Methods Literature Review

Ram Potham · Apr 18, 2025, 9:15 PM · 9 points · 1 comment · 9 min read · LW link