
Human Alignment

Last edit: Dec 6, 2022, 11:02 PM by Jordan Arel

Human alignment is a state of humanity in which most or all people systematically cooperate to achieve positive-sum outcomes for everyone (or, at a minimum, are prevented from pursuing negative-sum outcomes), in a way that is perpetually sustainable into the future. Such a state of human alignment may be necessary to prevent an existential catastrophe if the “Vulnerable World Hypothesis” is correct.

3. Uploading

RogerDearnaley · Nov 23, 2023, 7:39 AM
21 points
5 comments · 8 min read · LW link

[Question] What’s the best way to streamline two-party sale negotiations between real humans?

Isaac King · May 19, 2023, 11:30 PM
15 points
21 comments · 1 min read · LW link

Open-ended ethics of phenomena (a desiderata with universal morality)

Ryo · Nov 8, 2023, 8:10 PM
1 point
0 comments · 8 min read · LW link

Paradigm-building from first principles: Effective altruism, AGI, and alignment

Cameron Berg · Feb 8, 2022, 4:12 PM
29 points
5 comments · 14 min read · LW link

Tetherware #1: The case for humanlike AI with free will

Jáchym Fibír · Jan 30, 2025, 10:58 AM
5 points
10 comments · 10 min read · LW link
(tetherware.substack.com)

How to respond to the recent condemnations of the rationalist community

Christopher King · Apr 4, 2023, 1:42 AM
−2 points
7 comments · 4 min read · LW link

The case for “Generous Tit for Tat” as the ultimate game theory strategy

positivesum · Nov 9, 2023, 6:41 PM
2 points
3 comments · 8 min read · LW link
(tryingtruly.substack.com)

Open-ended/Phenomenal Ethics (TLDR)

Ryo · Nov 9, 2023, 4:58 PM
3 points
0 comments · 1 min read · LW link

Great Empathy and Great Response Ability

positivesum · Nov 13, 2023, 11:04 PM
16 points
0 comments · 3 min read · LW link
(tryingtruly.substack.com)

How “Pinky Promise” diplomacy once stopped a war in the Middle East

positivesum · Nov 22, 2023, 12:03 PM
15 points
9 comments · 1 min read · LW link
(tryingtruly.substack.com)

How Microsoft’s ruthless employee evaluation system annihilated team collaboration.

positivesum · Nov 25, 2023, 1:25 PM
3 points
2 comments · 1 min read · LW link
(tryingtruly.substack.com)

Humanity Alignment Theory

Hubert Ulmanski · May 17, 2023, 6:32 PM
1 point
0 comments · 7 min read · LW link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · Dec 6, 2022, 10:35 PM
4 points
2 comments · 3 min read · LW link

Antagonistic AI

Xybermancer · Mar 1, 2024, 6:50 PM
−8 points
1 comment · 1 min read · LW link

How to Promote More Productive Dialogue Outside of LessWrong

sweenesm · Jan 15, 2024, 2:16 PM
18 points
4 comments · 2 min read · LW link