
1a3orn

Karma: 4,266

1a3orn.com

Claude’s Constitutional Consequentialism?

1a3orn · Dec 19, 2024, 7:53 PM
43 points
6 comments · 6 min read · LW link

1a3orn’s Shortform

1a3orn · Jan 5, 2024, 3:04 PM
5 points
9 comments · 1 min read · LW link

Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk

1a3orn · Nov 2, 2023, 6:20 PM
193 points
79 comments · 23 min read · LW link

Ways I Expect AI Regulation To Increase Extinction Risk

1a3orn · Jul 4, 2023, 5:32 PM
225 points
32 comments · 7 min read · LW link

Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?

1a3orn · Jun 1, 2023, 7:36 PM
137 points
76 comments · 24 min read · LW link · 2 reviews

Giant (In)scrutable Matrices: (Maybe) the Best of All Possible Worlds

1a3orn · Apr 4, 2023, 5:39 PM
208 points
38 comments · 5 min read · LW link · 1 review

[Question] What is a good comprehensive examination of risks near the Ohio train derailment?

1a3orn · Mar 16, 2023, 12:21 AM
17 points
0 comments · 1 min read · LW link

Parameter Scaling Comes for RL, Maybe

1a3orn · Jan 24, 2023, 1:55 PM
100 points
3 comments · 14 min read · LW link

“A Generalist Agent”: New DeepMind Publication

1a3orn · May 12, 2022, 3:30 PM
79 points
43 comments · 1 min read · LW link

New Scaling Laws for Large Language Models

1a3orn · Apr 1, 2022, 8:41 PM
246 points
22 comments · 5 min read · LW link

EfficientZero: How It Works

1a3orn · Nov 26, 2021, 3:17 PM
298 points
50 comments · 29 min read · LW link · 1 review

Jitters No Evidence of Stupidity in RL

1a3orn · Sep 16, 2021, 10:43 PM
96 points
18 comments · 3 min read · LW link

How DeepMind’s Generally Capable Agents Were Trained

1a3orn · Aug 20, 2021, 6:52 PM
87 points
6 comments · 19 min read · LW link

Coase’s “Nature of the Firm” on Polyamory

1a3orn · Aug 13, 2021, 1:15 PM
102 points
34 comments · 1 min read · LW link · 2 reviews

Promoting Prediction Markets With Meaningless Internet-Point Badges

1a3orn · Feb 8, 2021, 7:03 PM
59 points
21 comments · 2 min read · LW link