AnnaSalamon · Karma: 17,918
Ayn Rand’s model of “living money”; and an upside of burnout
AnnaSalamon · 16 Nov 2024 2:59 UTC · 150 points · 34 comments · 5 min read
Scissors Statements for President?
AnnaSalamon · 6 Nov 2024 10:38 UTC · 107 points · 31 comments · 1 min read
Believing In
AnnaSalamon · 8 Feb 2024 7:06 UTC · 230 points · 51 comments · 13 min read
[Question] Which parts of the existing internet are already likely to be in (GPT-5/other soon-to-be-trained LLMs)’s training corpus?
AnnaSalamon · 29 Mar 2023 5:17 UTC · 49 points · 2 comments · 1 min read
[Question] Are there specific books that it might slightly help alignment to have on the internet?
AnnaSalamon · 29 Mar 2023 5:08 UTC · 77 points · 25 comments · 1 min read
What should you change in response to an “emergency”? And AI risk
AnnaSalamon · 18 Jul 2022 1:11 UTC · 336 points · 60 comments · 6 min read · 1 review
Comment reply: my low-quality thoughts on why CFAR didn’t get farther with a “real/efficacious art of rationality”
AnnaSalamon · 9 Jun 2022 2:12 UTC · 260 points · 63 comments · 17 min read · 1 review
Narrative Syncing
AnnaSalamon · 1 May 2022 1:48 UTC · 118 points · 48 comments · 7 min read · 1 review
The feeling of breaking an Overton window
AnnaSalamon · 17 Feb 2021 5:31 UTC · 131 points · 29 comments · 1 min read · 1 review
“PR” is corrosive; “reputation” is not.
AnnaSalamon · 14 Feb 2021 3:32 UTC · 318 points · 95 comments · 2 min read · 3 reviews
[Question] Where do (did?) stable, cooperative institutions come from?
AnnaSalamon · 3 Nov 2020 22:14 UTC · 151 points · 72 comments · 4 min read
Reality-Revealing and Reality-Masking Puzzles
AnnaSalamon · 16 Jan 2020 16:15 UTC · 264 points · 57 comments · 13 min read · 1 review
We run the Center for Applied Rationality, AMA
AnnaSalamon · 19 Dec 2019 16:34 UTC · 108 points · 324 comments · 1 min read
AnnaSalamon’s Shortform
AnnaSalamon · 25 Jul 2019 5:24 UTC · 20 points · 12 comments · 1 min read
“Flinching away from truth” is often about *protecting* the epistemology
AnnaSalamon · 20 Dec 2016 18:39 UTC · 233 points · 58 comments · 7 min read
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”
AnnaSalamon · 12 Dec 2016 19:39 UTC · 65 points · 38 comments · 5 min read
CFAR’s new mission statement (on our website)
AnnaSalamon · 10 Dec 2016 8:37 UTC · 15 points · 14 comments · 1 min read · (www.rationality.org)
CFAR’s new focus, and AI Safety
AnnaSalamon · 3 Dec 2016 18:09 UTC · 51 points · 88 comments · 3 min read
On the importance of Less Wrong, or another single conversational locus
AnnaSalamon · 27 Nov 2016 17:13 UTC · 176 points · 365 comments · 4 min read
Several free CFAR summer programs on rationality and AI safety
AnnaSalamon · 14 Apr 2016 2:35 UTC · 29 points · 14 comments · 2 min read