False Name
Karma: −82
Mercy to the Machine: Thoughts & Rights · Apr 27, 2024, 4:36 PM · 7 points · 6 comments · 17 min read · LW link
[Question] What’s Your Best AI Safety “Quip”? · Mar 26, 2024, 3:35 PM · −2 points · 0 comments · 1 min read · LW link
Impossibility of Anthropocentric-Alignment · Feb 24, 2024, 6:31 PM · −8 points · 2 comments · 39 min read · LW link
A Challenge to Effective Altruism’s Premises · Jan 6, 2024, 6:46 PM · −26 points · 3 comments · 3 min read · LW link
Worldwork for Ethics · Oct 17, 2023, 6:55 PM · 8 points · 1 comment · 24 min read · LW link
Introspective Bayes · May 27, 2023, 7:35 PM · −3 points · 2 comments · 16 min read · LW link
What about an AI that’s SUPPOSED to kill us (not ChaosGPT; only on paper)? · Apr 11, 2023, 4:09 PM · −13 points · 1 comment · 3 min read · LW link
Contra-Berkeley · Apr 11, 2023, 4:06 PM · 0 points · 0 comments · 4 min read · LW link
Contra-Wittgenstein; no postmodernism · Apr 11, 2023, 4:05 PM · −17 points · 1 comment · 5 min read · LW link
Two Reasons for no Utilitarianism · Feb 25, 2023, 7:51 PM · −4 points · 3 comments · 3 min read · LW link
What “upside” of AI? · Dec 30, 2022, 8:58 PM · 0 points · 5 comments · 4 min read · LW link
Crypto-currency as pro-alignment mechanism · Dec 27, 2022, 5:45 PM · −10 points · 2 comments · 2 min read · LW link
Contrary to List of Lethality’s point 22, alignment’s door number 2 · Dec 14, 2022, 10:01 PM · −2 points · 5 comments · 22 min read · LW link
Kolmogorov Complexity and Simulation Hypothesis · Dec 14, 2022, 10:01 PM · −3 points · 0 comments · 7 min read · LW link