Vael Gates

Karma: 743

Offering AI safety support calls for ML professionals

Vael Gates · Feb 15, 2024, 11:48 PM
61 points
1 comment · LW link

Retrospective on the AI Safety Field Building Hub

Vael Gates · Feb 2, 2023, 2:06 AM
30 points
0 comments · LW link

Interviews with 97 AI Researchers: Quantitative Analysis

Feb 2, 2023, 1:01 AM
23 points
0 comments · 7 min read · LW link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

Feb 2, 2023, 1:00 AM
43 points
1 comment · LW link

Predicting researcher interest in AI alignment

Vael Gates · Feb 2, 2023, 12:58 AM
25 points
0 comments · LW link

What AI Safety Materials Do ML Researchers Find Compelling?

Dec 28, 2022, 2:03 AM
175 points
34 comments · 2 min read · LW link

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding

Vael Gates · Jul 28, 2022, 9:29 PM
49 points
3 comments · 6 min read · LW link

Resources I send to AI researchers about AI safety

Vael Gates · Jun 14, 2022, 2:24 AM
69 points
12 comments · 1 min read · LW link

Vael Gates: Risks from Advanced AI (June 2022)

Vael Gates · Jun 14, 2022, 12:54 AM
38 points
2 comments · 30 min read · LW link

Transcripts of interviews with AI researchers

Vael Gates · May 9, 2022, 5:57 AM
170 points
9 comments · 2 min read · LW link

Self-studying to develop an inside-view model of AI alignment; co-studiers welcome!

Vael Gates · Nov 30, 2021, 9:25 AM
13 points
0 comments · 4 min read · LW link