
Multipolar Scenarios

Last edit: Dec 30, 2024, 10:41 AM by Dakara

A multipolar scenario is one in which no single AI or other agent takes over the world; instead, power remains distributed among multiple AIs, people, or institutions.

The concept is discussed in the book "Superintelligence" by Nick Bostrom.

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch, Mar 31, 2021, 11:50 PM
282 points
65 comments, 22 min read, LW link, 1 review

Why multi-agent safety is important

Akbir Khan, Jun 14, 2022, 9:23 AM
10 points
2 comments, 10 min read, LW link

Superintelligence 17: Multipolar scenarios

KatjaGrace, Jan 6, 2015, 6:44 AM
9 points
38 comments, 6 min read, LW link

Equilibrium and prior selection problems in multipolar deployment

JesseClifton, Apr 2, 2020, 8:06 PM
21 points
11 comments, 10 min read, LW link

What Failure Looks Like: Distilling the Discussion

Ben Pace, Jul 29, 2020, 9:49 PM
82 points
14 comments, 7 min read, LW link

Achieving AI Alignment through Deliberate Uncertainty in Multiagent Systems

Florian_Dietz, Feb 17, 2024, 8:45 AM
4 points
0 comments, 13 min read, LW link

Commitment and credibility in multipolar AI scenarios

anni_leskela, Dec 4, 2020, 6:48 PM
31 points
3 comments, 18 min read, LW link

60+ Possible Futures

Bart Bussmann, Jun 26, 2023, 9:16 AM
93 points
18 comments, 11 min read, LW link

The Choice Transition

Nov 18, 2024, 12:30 PM
50 points
4 comments, 15 min read, LW link
(strangecities.substack.com)

A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans

Thane Ruthenis, Dec 17, 2023, 8:28 PM
29 points
7 comments, 11 min read, LW link

[Question] In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?

JesseClifton, Dec 1, 2020, 8:04 PM
14 points
6 comments, 1 min read, LW link

[Question] How would two superintelligent AIs interact, if they are unaligned with each other?

Nathan1123, Aug 9, 2022, 6:58 PM
4 points
6 comments, 1 min read, LW link

Trajectories to 2036

ukc10014, Oct 20, 2022, 8:23 PM
3 points
1 comment, 14 min read, LW link

Nine Points of Collective Insanity

Dec 27, 2022, 3:14 AM
−2 points
3 comments, 1 min read, LW link
(mflb.com)

Alignment is not enough

Alan Chan, Jan 12, 2023, 12:33 AM
12 points
6 comments, 11 min read, LW link
(coordination.substack.com)

The Alignment Problems

Martín Soto, Jan 12, 2023, 10:29 PM
20 points
0 comments, 4 min read, LW link

Darwinian Traps and Existential Risks

KristianRonn, Aug 25, 2024, 10:37 PM
80 points
14 comments, 10 min read, LW link

The Fragility of Life Hypothesis and the Evolution of Cooperation

KristianRonn, Sep 4, 2024, 9:04 PM
50 points
6 comments, 11 min read, LW link

The need for multi-agent experiments

Martín Soto, Aug 1, 2024, 5:14 PM
43 points
3 comments, 9 min read, LW link

Cooperation and Alignment in Delegation Games: You Need Both!

Aug 3, 2024, 10:16 AM
8 points
0 comments, 14 min read, LW link
(www.oliversourbut.net)

The Stag Hunt—cultivating cooperation to reap rewards

James Stephen Brown, Feb 25, 2025, 11:45 PM
7 points
0 comments, 4 min read, LW link
(nonzerosum.games)

[Question] How can humanity survive a multipolar AGI scenario?

Leonard Holloway, Jan 9, 2025, 8:17 PM
13 points
8 comments, 2 min read, LW link

Agentized LLMs will change the alignment landscape

Seth Herd, Apr 9, 2023, 2:29 AM
160 points
102 comments, 3 min read, LW link, 1 review

AI x-risk, approximately ordered by embarrassment

Alex Lawsen, Apr 12, 2023, 11:01 PM
151 points
7 comments, 19 min read, LW link

Capabilities and alignment of LLM cognitive architectures

Seth Herd, Apr 18, 2023, 4:29 PM
86 points
18 comments, 20 min read, LW link