RohanS

Karma: 232

I aim to promote welfare and reduce suffering as much as possible. This has led me to work on AGI safety research. I am particularly interested in foundation model agents (FMAs): systems like AutoGPT and Devin that equip foundation models with memory, tool use, and other affordances so they can perform multi-step tasks autonomously.

Previously, I completed an undergrad in CS and Math at Columbia, where I helped run Columbia Effective Altruism and Columbia AI Alignment Club (CAIAC).

RohanS’s Shortform

RohanS · Dec 31, 2024, 4:11 PM
3 points
24 comments · LW link

~80 Interesting Questions about Foundation Model Agent Safety

Oct 28, 2024, 4:37 PM
46 points
4 comments · 15 min read · LW link

Transformers Explained (Again)

RohanS · Oct 22, 2024, 4:06 AM
4 points
0 comments · 18 min read · LW link

Apply to Aether—Independent LLM Agent Safety Research Group

RohanS · Aug 21, 2024, 9:47 AM
9 points
0 comments · 7 min read · LW link
(forum.effectivealtruism.org)

Notes on “How do we become confident in the safety of a machine learning system?”

RohanS · Oct 26, 2023, 3:13 AM
4 points
0 comments · 13 min read · LW link

Quick Thoughts on Language Models

RohanS · Jul 18, 2023, 8:38 PM
6 points
0 comments · 4 min read · LW link

~100 Interesting Questions

RohanS · Mar 30, 2023, 1:57 PM
53 points
18 comments · 9 min read · LW link

A Thorough Introduction to Abstraction

RohanS · Jan 13, 2023, 12:30 AM
9 points
1 comment · 18 min read · LW link

Content and Takeaways from SERI MATS Training Program with John Wentworth

RohanS · Dec 24, 2022, 4:17 AM
28 points
3 comments · 12 min read · LW link

Follow along with Columbia EA’s Advanced AI Safety Fellowship!

RohanS · Jul 2, 2022, 5:45 PM
3 points
0 comments · 2 min read · LW link
(forum.effectivealtruism.org)