
Future Fund Worldview Prize

Last edit: Sep 23, 2022, 11:17 PM by interstice

Why I think strong general AI is coming soon

porby · Sep 28, 2022, 5:40 AM
337 points
141 comments · 34 min read · LW link · 1 review

Counterarguments to the basic AI x-risk case

KatjaGrace · Oct 14, 2022, 1:00 PM
371 points
124 comments · 34 min read · LW link · 1 review
(aiimpacts.org)

What does it take to defend the world against out-of-control AGIs?

Steven Byrnes · Oct 25, 2022, 2:47 PM
208 points
49 comments · 30 min read · LW link · 1 review

AI will change the world, but won’t take it over by playing “3-dimensional chess”.

Nov 22, 2022, 6:57 PM
134 points
97 comments · 24 min read · LW link

AI Timelines via Cumulative Optimization Power: Less Long, More Short

jacob_cannell · Oct 6, 2022, 12:21 AM
138 points
33 comments · 6 min read · LW link

How to Train Your AGI Dragon

Eris Discordia · Sep 21, 2022, 10:28 PM
−1 points
3 comments · 5 min read · LW link

You are Underestimating The Likelihood That Convergent Instrumental Subgoals Lead to Aligned AGI

Mark Neyer · Sep 26, 2022, 2:22 PM
3 points
6 comments · 3 min read · LW link

Loss of Alignment is not the High-Order Bit for AI Risk

yieldthought · Sep 26, 2022, 9:16 PM
14 points
18 comments · 2 min read · LW link

Will Values and Competition Decouple?

interstice · Sep 28, 2022, 4:27 PM
15 points
11 comments · 17 min read · LW link

AGI by 2050 probability less than 1%

fumin · Oct 1, 2022, 7:45 PM
−10 points
4 comments · 9 min read · LW link
(docs.google.com)

Frontline of AGI Alignment

SD Marlow · Oct 4, 2022, 3:47 AM
−10 points
0 comments · 1 min read · LW link
(robothouse.substack.com)

Charitable Reads of Anti-AGI-X-Risk Arguments, Part 1

sstich · Oct 5, 2022, 5:03 AM
3 points
4 comments · 3 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · Oct 6, 2022, 6:05 PM
−13 points
8 comments · 1 min read · LW link

The Lebowski Theorem — Charitable Reads of Anti-AGI-X-Risk Arguments, Part 2

sstich · Oct 8, 2022, 10:39 PM
1 point
10 comments · 7 min read · LW link

Don’t expect AGI anytime soon

cveres · Oct 10, 2022, 10:38 PM
−14 points
6 comments · 1 min read · LW link

Updates and Clarifications

SD Marlow · Oct 11, 2022, 5:34 AM
−5 points
1 comment · 1 min read · LW link

My argument against AGI

cveres · Oct 12, 2022, 6:33 AM
7 points
5 comments · 1 min read · LW link

A strange twist on the road to AGI

cveres · Oct 12, 2022, 11:27 PM
−8 points
0 comments · 1 min read · LW link

“AGI soon, but Narrow works Better”

AnthonyRepetto · Oct 14, 2022, 9:35 PM
1 point
9 comments · 2 min read · LW link

All life’s helpers’ beliefs

Tehdastehdas · Oct 28, 2022, 5:47 AM
−12 points
1 comment · 5 min read · LW link

“Originality is nothing but judicious imitation”—Voltaire

Vestozia · Oct 23, 2022, 7:00 PM
0 points
0 comments · 13 min read · LW link

AGI in our lifetimes is wishful thinking

niknoble · Oct 24, 2022, 11:53 AM
1 point
25 comments · 8 min read · LW link

Why some people believe in AGI, but I don’t.

cveres · Oct 26, 2022, 3:09 AM
−15 points
6 comments · 1 min read · LW link

Worldview iPeople—Future Fund’s AI Worldview Prize

Toni MUENDEL · Oct 28, 2022, 1:53 AM
−22 points
4 comments · 9 min read · LW link

AI as a Civilizational Risk Part 1/6: Historical Priors

PashaKamyshev · Oct 29, 2022, 9:59 PM
2 points
2 comments · 7 min read · LW link

AI as a Civilizational Risk Part 2/6: Behavioral Modification

PashaKamyshev · Oct 30, 2022, 4:57 PM
9 points
0 comments · 10 min read · LW link

AI as a Civilizational Risk Part 3/6: Anti-economy and Signal Pollution

PashaKamyshev · Oct 31, 2022, 5:03 PM
7 points
4 comments · 14 min read · LW link

AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification

PashaKamyshev · Nov 1, 2022, 8:50 PM
7 points
1 comment · 8 min read · LW link

AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk

PashaKamyshev · Nov 3, 2022, 2:19 AM
2 points
0 comments · 7 min read · LW link

AI as a Civilizational Risk Part 6/6: What can be done

PashaKamyshev · Nov 3, 2022, 7:48 PM
2 points
4 comments · 4 min read · LW link

Why do we post our AI safety plans on the Internet?

Peter S. Park · Nov 3, 2022, 4:02 PM
4 points
4 comments · 11 min read · LW link

Review of the Challenge

SD Marlow · Nov 5, 2022, 6:38 AM
−14 points
5 comments · 2 min read · LW link

When can a mimic surprise you? Why generative models handle seemingly ill-posed problems

David Johnston · Nov 5, 2022, 1:19 PM
8 points
4 comments · 16 min read · LW link

Loss of control of AI is not a likely source of AI x-risk

squek · Nov 7, 2022, 6:44 PM
−6 points
0 comments · 5 min read · LW link

How likely are malign priors over objectives? [aborted WIP]

David Johnston · Nov 11, 2022, 5:36 AM
−1 points
0 comments · 8 min read · LW link

AGI Impossible due to Energy Constraints

TheKlaus · Nov 30, 2022, 6:48 PM
−11 points
13 comments · 1 min read · LW link

A Fallibilist Worldview

Toni MUENDEL · Dec 7, 2022, 8:59 PM
−13 points
2 comments · 13 min read · LW link

AGI is here, but nobody wants it. Why should we even care?

MGow · Dec 20, 2022, 7:14 PM
−22 points
0 comments · 17 min read · LW link

Issues with uneven AI resource distribution

User_Luke · Dec 24, 2022, 1:18 AM
3 points
9 comments · 5 min read · LW link
(temporal.substack.com)

Transformative AGI by 2043 is <1% likely

Ted Sanders · Jun 6, 2023, 5:36 PM
33 points
117 comments · 5 min read · LW link
(arxiv.org)

AI coöperation is more possible than you think

423175 · Sep 24, 2022, 9:26 PM
7 points
0 comments · 2 min read · LW link

“Cotton Gin” AI Risk

423175 · Sep 24, 2022, 9:26 PM
7 points
3 comments · 2 min read · LW link

P(misalignment x-risk|AGI) is small #[Future Fund worldview prize]

Dibbu Dibbu · Sep 24, 2022, 11:54 PM
−18 points
0 comments · 4 min read · LW link