ozhang

Karma: 370

$250K in Prizes: SafeBench Competition Announcement

ozhang · Apr 3, 2024, 10:07 PM
26 points
0 comments · 1 min read · LW link

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

May 2, 2023, 6:41 PM
32 points
0 comments · 5 min read · LW link
(newsletter.safe.ai)

AI Safety Newsletter #3: AI policy proposals and a new challenger approaches

ozhang · Apr 25, 2023, 4:15 PM
33 points
0 comments · 1 min read · LW link

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

Apr 18, 2023, 6:44 PM
30 points
0 comments · 4 min read · LW link
(newsletter.safe.ai)

AI Safety Newsletter #1 [CAIS Linkpost]

Apr 10, 2023, 8:18 PM
45 points
0 comments · 4 min read · LW link
(newsletter.safe.ai)

Announcing the Introduction to ML Safety course

Aug 6, 2022, 2:46 AM
73 points
6 comments · 7 min read · LW link

$20K In Bounties for AI Safety Public Materials

Aug 5, 2022, 2:52 AM
71 points
9 comments · 6 min read · LW link

Introducing the ML Safety Scholars Program

May 4, 2022, 4:01 PM
74 points
3 comments · 3 min read · LW link

SERI ML Alignment Theory Scholars Program 2022

Apr 27, 2022, 12:43 AM
67 points
6 comments · 3 min read · LW link

[$20K in Prizes] AI Safety Arguments Competition

Apr 26, 2022, 4:13 PM
75 points
518 comments · 3 min read · LW link

ML Alignment Theory Program under Evan Hubinger

Dec 6, 2021, 12:03 AM
82 points
3 comments · 2 min read · LW link