
JakubK

Karma: 398

Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds

JakubK · May 2, 2023, 10:50 PM
10 points
0 comments · LW link

Notes on “the hot mess theory of AI misalignment”

JakubK · Apr 21, 2023, 10:07 AM
16 points
0 comments · 5 min read · LW link
(sohl-dickstein.github.io)

GPT-4 solves Gary Marcus-induced flubs

JakubK · Mar 17, 2023, 6:40 AM
56 points
29 comments · 2 min read · LW link
(docs.google.com)

Next steps after AGISF at UMich

JakubK · Jan 25, 2023, 8:57 PM
10 points
0 comments · 5 min read · LW link
(docs.google.com)

List of technical AI safety exercises and projects

JakubK · Jan 19, 2023, 9:35 AM
41 points
5 comments · 1 min read · LW link
(docs.google.com)

6-paragraph AI risk intro for MAISI

JakubK · Jan 19, 2023, 9:22 AM
11 points
0 comments · 2 min read · LW link
(www.maisi.club)

Big list of AI safety videos

JakubK · Jan 9, 2023, 6:12 AM
11 points
2 comments · 1 min read · LW link
(docs.google.com)

Summary of 80k’s AI problem profile

JakubK · Jan 1, 2023, 7:30 AM
7 points
0 comments · 5 min read · LW link
(forum.effectivealtruism.org)

New AI risk intro from Vox [link post]

JakubK · Dec 21, 2022, 6:00 AM
5 points
1 comment · 2 min read · LW link
(www.vox.com)

[Question] Best introductory overviews of AGI safety?

JakubK · Dec 13, 2022, 7:01 PM
21 points
9 comments · 2 min read · LW link
(forum.effectivealtruism.org)

[Question] Can we get full audio for Eliezer’s conversation with Sam Harris?

JakubK · Aug 7, 2022, 8:35 PM
30 points
8 comments · 1 min read · LW link