
Center on Long-Term Risk (CLR)


The Center on Long-Term Risk (CLR), formerly the Foundational Research Institute, is a research group that investigates cooperative strategies to reduce risks of astronomical suffering (s-risks). This concern covers not only (post-)human suffering but also the suffering of potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, philosophy, and other fields. Its research agenda focuses on encouraging cooperative behavior in transformative AI systems and avoiding conflict between them.


Section 7: Foundations of Rational Agency

JesseClifton, Dec 22, 2019, 2:05 AM
14 points
4 comments, 8 min read

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI

JesseClifton, Dec 13, 2019, 9:02 PM
62 points
10 comments, 2 min read

Sections 1 & 2: Introduction, Strategy and Governance

JesseClifton, Dec 17, 2019, 9:27 PM
35 points
8 comments, 14 min read

Sections 5 & 6: Contemporary Architectures, Humans in the Loop

JesseClifton, Dec 20, 2019, 3:52 AM
27 points
4 comments, 10 min read

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms

JesseClifton, Dec 17, 2019, 9:46 PM
20 points
2 comments, 12 min read

Multiverse-wide Cooperation via Correlated Decision Making

Kaj_Sotala, Aug 20, 2017, 12:01 PM
5 points
2 comments, 1 min read
(foundational-research.org)

Against GDP as a metric for timelines and takeoff speeds

Daniel Kokotajlo, Dec 29, 2020, 5:42 PM
140 points
19 comments, 14 min read

Mak­ing AIs less likely to be spiteful

Sep 26, 2023, 2:12 PM
116 points
4 comments, 10 min read

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

Daniel Kokotajlo, Jan 18, 2021, 12:08 PM
194 points
86 comments, 13 min read

2019 AI Alignment Literature Review and Charity Comparison

Larks, Dec 19, 2019, 3:00 AM
130 points
18 comments, 62 min read

2018 AI Alignment Literature Review and Charity Comparison

Larks, Dec 18, 2018, 4:46 AM
190 points
26 comments, 62 min read

[Question] Likelihood of hyperexistential catastrophe from a bug?

Anirandis, Jun 18, 2020, 4:23 PM
14 points
27 comments, 1 min read

Responses to apparent rationalist confusions about game / decision theory

Anthony DiGiovanni, Aug 30, 2023, 10:02 PM
142 points
20 comments, 12 min read

Individually incentivized safe Pareto improvements in open-source bargaining

Jul 17, 2024, 6:26 PM
41 points
2 comments, 17 min read

[Question] (Crosspost) Asking for online calls on AI s-risks discussions

jackchang110, May 15, 2023, 5:42 PM
1 point
0 comments, 1 min read
(forum.effectivealtruism.org)

CLR’s recent work on multi-agent systems

JesseClifton, Mar 9, 2021, 2:28 AM
54 points
2 comments, 13 min read

Formalizing Objections against Surrogate Goals

VojtaKovarik, Sep 2, 2021, 4:24 PM
16 points
23 comments, 1 min read

When does technical work to reduce AGI conflict make a difference?: Introduction

Sep 14, 2022, 7:38 PM
52 points
3 comments, 6 min read

When would AGIs engage in conflict?

Sep 14, 2022, 7:38 PM
52 points
5 comments, 13 min read

When is intent alignment sufficient or necessary to reduce AGI conflict?

Sep 14, 2022, 7:39 PM
40 points
0 comments, 9 min read

Open-minded updatelessness

Jul 10, 2023, 11:08 AM
65 points
21 comments, 12 min read