Rob Bensinger

Karma: 22,010

Communications @ MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer’s. (Though we agree about an awful lot.)

MIRI’s 2024 End-of-Year Update

Rob Bensinger · Dec 3, 2024, 4:33 AM
98 points
2 comments · 4 min read · LW link

Response to Aschenbrenner’s “Situational Awareness”

Rob Bensinger · Jun 6, 2024, 10:57 PM
194 points
27 comments · 3 min read · LW link

When is a mind me?

Rob Bensinger · Apr 17, 2024, 5:56 AM
144 points
130 comments · 15 min read · LW link

AI Views Snapshots

Rob Bensinger · Dec 13, 2023, 12:45 AM
142 points
61 comments · 1 min read · LW link

An artificially structured argument for expecting AGI ruin

Rob Bensinger · May 7, 2023, 9:52 PM
91 points
26 comments · 19 min read · LW link

AGI ruin mostly rests on strong claims about alignment and deployment, not about society

Rob Bensinger · Apr 24, 2023, 1:06 PM
70 points
8 comments · 6 min read · LW link

The basic reasons I expect AGI ruin

Rob Bensinger · Apr 18, 2023, 3:37 AM
189 points
73 comments · 14 min read · LW link

Four mindset disagreements behind existential risk disagreements in ML

Rob Bensinger · Apr 11, 2023, 4:53 AM
136 points
12 comments · 1 min read · LW link

Yudkowsky on AGI risk on the Bankless podcast

Rob Bensinger · Mar 13, 2023, 12:42 AM
83 points
5 comments · 1 min read · LW link

Elements of Rationalist Discourse

Rob Bensinger · Feb 12, 2023, 7:58 AM
224 points
49 comments · 3 min read · LW link · 1 review

Thoughts on AGI organizations and capabilities work

Dec 7, 2022, 7:46 PM
102 points
17 comments · 5 min read · LW link

A challenge for AGI organizations, and a challenge for readers

Dec 1, 2022, 11:11 PM
302 points
33 comments · 2 min read · LW link

A common failure for foxes

Rob Bensinger · Oct 14, 2022, 10:50 PM
47 points
7 comments · 2 min read · LW link

ITT-passing and civility are good; “charity” is bad; steelmanning is niche

Rob Bensinger · Jul 5, 2022, 12:15 AM
163 points
36 comments · 6 min read · LW link · 1 review

The inordinately slow spread of good AGI conversations in ML

Rob Bensinger · Jun 21, 2022, 4:09 PM
173 points
62 comments · 8 min read · LW link

On saving one’s world

Rob Bensinger · May 17, 2022, 7:53 PM
192 points
4 comments · 1 min read · LW link

Late 2021 MIRI Conversations: AMA / Discussion

Rob Bensinger · Feb 28, 2022, 8:03 PM
119 points
199 comments · 1 min read · LW link

Animal welfare EA and personal dietary options

Rob Bensinger · Jan 5, 2022, 6:53 PM
38 points
32 comments · 3 min read · LW link

Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment

Rob Bensinger · Dec 12, 2021, 2:08 AM
70 points
35 comments · 7 min read · LW link

Conversation on technology forecasting and gradualism

Dec 9, 2021, 9:23 PM
108 points
30 comments · 31 min read · LW link