[Question] Near-mode cryonics: A thought experiment · Mati_Roy · 9 Apr 2023 22:21 UTC · 3 points · 2 comments · 1 min read
3 entirely different things we call “Time” · Vasyl Dotsenko · 9 Apr 2023 20:08 UTC · 0 points · 6 comments · 2 min read · (medium.com)
Foom seems unlikely in the current LLM training paradigm · Ocracoke · 9 Apr 2023 19:41 UTC · 18 points · 9 comments · 1 min read
What Piles Up Must Pile Down · silentbob · 9 Apr 2023 18:37 UTC · 35 points · 4 comments · 6 min read
[Question] What games are using the concept of a Schelling point? · Mati_Roy · 9 Apr 2023 17:21 UTC · 9 points · 13 comments · 1 min read
[Question] Review & rebuttal of “Why machines will never rule the world: artificial intelligence without fear” · mikbp · 9 Apr 2023 15:06 UTC · 4 points · 4 comments · 1 min read
Being at peace with Doom · Johannes C. Mayer · 9 Apr 2023 14:53 UTC · 24 points · 13 comments · 4 min read
Expanding the domain of discourse reveals structure already there but hidden · TsviBT · 9 Apr 2023 13:36 UTC · 30 points · 4 comments · 6 min read
Rooms Available in Downtown Berkeley Group House · Arjun Panickssery · 9 Apr 2023 10:15 UTC · 3 points · 0 comments · 1 min read
AGI Safety Fundamentals 2023 Notes · Lisa Wang · 9 Apr 2023 7:28 UTC · 3 points · 0 comments · 1 min read · (lisacontemplates.blogspot.com)
Ng and LeCun on the 6-Month Pause (Transcript) · Stephen Fowler · 9 Apr 2023 6:14 UTC · 29 points · 7 comments · 16 min read
Agentized LLMs will change the alignment landscape · Seth Herd · 9 Apr 2023 2:29 UTC · 157 points · 97 comments · 3 min read
A decade of lurking, a month of posting · Max H · 9 Apr 2023 0:21 UTC · 70 points · 4 comments · 5 min read
[Question] Is there a fundamental distinction between simulating a mind and simulating *being* a mind? Is this a useful and important distinction? · Thoth Hermes · 8 Apr 2023 23:44 UTC · −17 points · 8 comments · 2 min read
“warning about ai doom” is also “announcing capabilities progress to noobs” · the gears to ascension · 8 Apr 2023 23:42 UTC · 23 points · 5 comments · 3 min read
Feature Request: Right Click to Copy LaTeX · DragonGod · 8 Apr 2023 23:27 UTC · 18 points · 4 comments · 1 min read
ELCK might require nontrivial scalable alignment progress, and seems tractable enough to try · Alex Lawsen · 8 Apr 2023 21:49 UTC · 17 points · 0 comments · 2 min read
GPTs are Predictors, not Imitators · Eliezer Yudkowsky · 8 Apr 2023 19:59 UTC · 403 points · 91 comments · 3 min read
4 generations of alignment · qbolec · 8 Apr 2023 19:59 UTC · 1 point · 0 comments · 3 min read
The surprising parameter efficiency of vision models · beren · 8 Apr 2023 19:44 UTC · 77 points · 28 comments · 4 min read
Random Observation on AI goals · FTPickle · 8 Apr 2023 19:28 UTC · −11 points · 2 comments · 1 min read
Can we evaluate the “tool versus agent” AGI prediction? · Xodarap · 8 Apr 2023 18:40 UTC · 16 points · 7 comments · 1 min read
Relative Abstracted Agency · Audere · 8 Apr 2023 16:57 UTC · 14 points · 6 comments · 5 min read
The benevolence of the butcher · dr_s · 8 Apr 2023 16:29 UTC · 72 points · 30 comments · 6 min read
SERI MATS—Summer 2023 Cohort · Aris, Ryan Kidd and Christian Smith · 8 Apr 2023 15:32 UTC · 71 points · 25 comments · 4 min read
AI Proposals at ‘Two Sessions’: AGI as ‘Two Bombs, One Satellite’? · Derek M. Jones · 8 Apr 2023 11:31 UTC · 5 points · 0 comments · 1 min read · (www.chinatalk.media)
All images from the WaitButWhy sequence on AI · trevor · 8 Apr 2023 7:36 UTC · 73 points · 5 comments · 2 min read
Guidelines for productive discussions · ambigram · 8 Apr 2023 6:00 UTC · 37 points · 0 comments · 5 min read
All AGI Safety questions welcome (especially basic ones) [April 2023] · steven0461 · 8 Apr 2023 4:21 UTC · 57 points · 88 comments · 2 min read
Bringing Agency Into AGI Extinction Is Superfluous · George3d6 · 8 Apr 2023 4:02 UTC · 28 points · 18 comments · 5 min read
Lagos, Nigeria—ACX Meetups Everywhere 2023 · damola · 8 Apr 2023 3:55 UTC · 1 point · 0 comments · 1 min read
Upcoming Changes in Large Language Models · Andrew Keenan Richardson · 8 Apr 2023 3:41 UTC · 43 points · 8 comments · 4 min read · (mechanisticmind.com)
Consider The Hand Axe · ymeskhout · 8 Apr 2023 1:31 UTC · 142 points · 16 comments · 6 min read
AGI as a new data point · Will Rodgers · 8 Apr 2023 1:01 UTC · −1 points · 0 comments · 1 min read
Parametrize Priority Evaluations · SilverFlame · 8 Apr 2023 0:39 UTC · 2 points · 2 comments · 6 min read
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down · Eliezer Yudkowsky · 8 Apr 2023 0:36 UTC · 253 points · 40 comments · 12 min read
Humanitarian Phase Transition needed before Technological Singularity · Dr_What · 7 Apr 2023 23:17 UTC · −9 points · 5 comments · 2 min read
[Question] Thoughts about Hugging Face? · Ariel Kwiatkowski · 7 Apr 2023 23:17 UTC · 7 points · 0 comments · 1 min read
[Question] Is it correct to frame alignment as “programming a good philosophy of meaning”? · Util · 7 Apr 2023 23:16 UTC · 2 points · 3 comments · 1 min read
Select Agent Specifications as Natural Abstractions · lukemarks · 7 Apr 2023 23:16 UTC · 19 points · 3 comments · 5 min read
n=3 AI Risk Quick Math and Reasoning · lionhearted (Sebastian Marshall) · 7 Apr 2023 20:27 UTC · 6 points · 3 comments · 4 min read
[Question] What are good alternatives to Predictionbook for personal prediction tracking? Edited: I originally thought it was down but it was just 500 until I though of clearing cookies. · sortega · 7 Apr 2023 19:18 UTC · 4 points · 4 comments · 1 min read
Environments for Measuring Deception, Resource Acquisition, and Ethical Violations · Dan H · 7 Apr 2023 18:40 UTC · 51 points · 2 comments · 2 min read · (arxiv.org)
Superintelligence Is Not Omniscience · Jeffrey Heninger · 7 Apr 2023 16:30 UTC · 15 points · 20 comments · 7 min read · (aiimpacts.org)
An ‘AGI Emergency Eject Criteria’ consensus could be really useful. · tcelferact · 7 Apr 2023 16:21 UTC · 5 points · 0 comments · 1 min read
Reliability, Security, and AI risk: Notes from infosec textbook chapter 1 · Akash · 7 Apr 2023 15:47 UTC · 34 points · 1 comment · 4 min read
Pre-registering a study · Robert_AIZI · 7 Apr 2023 15:46 UTC · 10 points · 0 comments · 6 min read · (aizi.substack.com)
Live discussion at Eastercon · Douglas_Reay · 7 Apr 2023 15:25 UTC · 5 points · 0 comments · 1 min read
[Question] ChatGTP “Writing ” News Stories for The Guardian? · jmh · 7 Apr 2023 12:16 UTC · 1 point · 4 comments · 1 min read
Storyteller’s convention, 2223 A.D. · plex · 7 Apr 2023 11:54 UTC · 8 points · 0 comments · 2 min read