
Human Alignment

Last edit: 6 Dec 2022 23:02 UTC by Jordan Arel

Human alignment is a state in which most or all of humanity systematically cooperates to achieve positive-sum outcomes for everyone (or, at a minimum, is prevented from pursuing negative-sum outcomes), in a way that is perpetually sustainable into the future. Such a state of human alignment may be necessary to prevent an existential catastrophe in the case that the “Vulnerable World Hypothesis” is correct.
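As a rough illustration of the positive-sum versus negative-sum distinction the definition turns on, below is a minimal Python sketch of a symmetric two-player game. The action names and payoff numbers are hypothetical choices for illustration, not drawn from the tag or any of the posts below.

# Minimal sketch of positive-sum vs. negative-sum outcomes in a
# symmetric two-player game. All payoff values are hypothetical.
PAYOFFS = {
    # (row action, column action): (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),    # positive-sum: total welfare = 6
    ("cooperate", "defect"):    (-2, 1),   # exploitation: total welfare = -1
    ("defect",    "cooperate"): (1, -2),
    ("defect",    "defect"):    (-1, -1),  # negative-sum: total welfare = -2
}

def total_welfare(row_action: str, col_action: str) -> int:
    """Sum of both players' payoffs for a given pair of actions."""
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    return row_payoff + col_payoff

for actions in PAYOFFS:
    print(actions, "-> total welfare:", total_welfare(*actions))

Under these made-up payoffs, mutual cooperation is the only outcome with positive total welfare, while mutual defection destroys value for both players; systematically coordinating on the former and avoiding the latter is the kind of outcome the definition describes.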

3. Uploading

RogerDearnaley, 23 Nov 2023 7:39 UTC
21 points
5 comments, 8 min read, LW link

[Question] What’s the best way to streamline two-party sale negotiations between real humans?

Isaac King, 19 May 2023 23:30 UTC
15 points
21 comments, 1 min read, LW link

Open-ended ethics of phenomena (a desiderata with universal morality)

Ryo, 8 Nov 2023 20:10 UTC
1 point
0 comments, 8 min read, LW link

Paradigm-building from first principles: Effective altruism, AGI, and alignment

Cameron Berg, 8 Feb 2022 16:12 UTC
29 points
5 comments, 14 min read, LW link

How to respond to the recent condemnations of the rationalist community

Christopher King, 4 Apr 2023 1:42 UTC
−2 points
7 comments, 4 min read, LW link

The case for “Generous Tit for Tat” as the ultimate game theory strategy

positivesum, 9 Nov 2023 18:41 UTC
2 points
3 comments, 8 min read, LW link
(tryingtruly.substack.com)

Open-ended/Phenomenal Ethics (TLDR)

Ryo, 9 Nov 2023 16:58 UTC
3 points
0 comments, 1 min read, LW link

Great Empathy and Great Response Ability

positivesum, 13 Nov 2023 23:04 UTC
16 points
0 comments, 3 min read, LW link
(tryingtruly.substack.com)

How “Pinky Promise” diplomacy once stopped a war in the Middle East

positivesum, 22 Nov 2023 12:03 UTC
15 points
9 comments, 1 min read, LW link
(tryingtruly.substack.com)

How Microsoft’s ruthless employee evaluation system annihilated team collaboration.

positivesum, 25 Nov 2023 13:25 UTC
3 points
2 comments, 1 min read, LW link
(tryingtruly.substack.com)

Humanity Alignment Theory

Hubert Ulmanski, 17 May 2023 18:32 UTC
1 point
0 comments, 7 min read, LW link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel, 6 Dec 2022 22:35 UTC
4 points
2 comments, 3 min read, LW link

Antagonistic AI

Xybermancer, 1 Mar 2024 18:50 UTC
−8 points
1 comment, 1 min read, LW link

How to Promote More Productive Dialogue Outside of LessWrong

sweenesm, 15 Jan 2024 14:16 UTC
16 points
4 comments, 2 min read, LW link