Matthew_Opitz (Karma: 382)
Proxi-Antipodes: A Geometrical Intuition For The Difficulty Of Aligning AI With Multitudinous Human Values
9 Jun 2023 21:21 UTC · 7 points · 0 comments · 5 min read · LW link
DELBERTing as an Adversarial Strategy
12 May 2023 20:09 UTC · 8 points · 3 comments · 5 min read · LW link
The Academic Field Pyramid—any point to encouraging broad but shallow AI risk engagement?
11 May 2023 1:32 UTC · 20 points · 1 comment · 6 min read · LW link
Even if human & AI alignment are just as easy, we are screwed
13 Apr 2023 17:32 UTC · 35 points · 5 comments · 5 min read · LW link
Bing AI Generating Voynich Manuscript Continuations—It does not know how it knows
10 Apr 2023 20:22 UTC · 15 points · 6 comments · 13 min read · LW link
Matthew_Opitz’s Shortform
5 Apr 2023 19:42 UTC · 3 points · 2 comments · 1 min read · LW link
“NRx” vs. “Prog” Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1)
4 Sep 2014 16:58 UTC · 5 points · 340 comments · 11 min read · LW link