Evan_Gaensbauer (Karma: 1,797)
Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
5 Jan 2025 3:50 UTC · −8 points · 1 comment · 1 min read · LW link (www.geeky-gadgets.com)

Human Biodiversity (Part 4: Astral Codex Ten)
3 Nov 2024 4:20 UTC · −13 points · 6 comments · 1 min read · LW link (reflectivealtruism.com)

Catalogue of POLITICO Reports and Other Cited Articles on Effective Altruism and AI Safety Connections in Washington, DC
21 Jan 2024 2:15 UTC · 4 points · 0 comments · 1 min read · LW link (docs.google.com)

[Question] Are there high-quality surveys available detailing the rates of polyamory among Americans age 18-45 in metropolitan areas in the United States?
18 Jan 2024 23:50 UTC · 23 points · 0 comments · 1 min read · LW link

[Question] What evidence is there for (or against) theories about the extent to which effective altruist interests motivated the ouster of Sam Altman last year?
18 Jan 2024 5:14 UTC · 10 points · 0 comments · 1 min read · LW link

Changes in Community Dynamics: A Follow-Up to ‘The Berkeley Community & the Rest of Us’
9 Jul 2022 1:44 UTC · 21 points · 6 comments · 4 min read · LW link

[Question] Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted?
28 Jun 2022 5:45 UTC · 18 points · 20 comments · 1 min read · LW link

[Question] What are your recommendations for technical AI alignment podcasts?
11 May 2022 21:52 UTC · 5 points · 4 comments · 1 min read · LW link

[Question] What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about?
21 Apr 2022 23:32 UTC · 22 points · 14 comments · 1 min read · LW link

[Question] Is Anyone Else Seeking to Help Community Members in Ukraine Who Make Refugee Claims?
27 Feb 2022 0:51 UTC · 52 points · 5 comments · 1 min read · LW link

Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism
29 Dec 2021 20:30 UTC · 33 points · 2 comments · 2 min read · LW link

Vancouver Winter Solstice Meetup
15 Dec 2021 0:11 UTC · 6 points · 0 comments · 1 min read · LW link

[Question] Why do governments refer to existential risks primarily in terms of national security?
13 Dec 2021 1:05 UTC · 3 points · 3 comments · 1 min read · LW link

[Question] [Resolved] Who else prefers “AI alignment” to “AI safety?”
13 Dec 2021 0:35 UTC · 5 points · 8 comments · 1 min read · LW link

[Question] Does the Berkeley Existential Risk Initiative (self-)identify as an EA-aligned organization?
2 Jul 2020 17:38 UTC · 10 points · 10 comments · 1 min read · LW link

[Question] How Much Do Different Users Really Care About Upvotes?
22 Jul 2019 2:31 UTC · 14 points · 12 comments · 3 min read · LW link

[Question] How Can Rationalists Join Other Communities Interested in Truth-Seeking?
16 Jul 2019 3:29 UTC · 23 points · 9 comments · 1 min read · LW link

Evidence for Connection Theory
28 May 2019 17:06 UTC · 14 points · 13 comments · 1 min read · LW link (www.scribd.com)

[Question] Was CFAR always intended to be a distinct organization from MIRI?
27 May 2019 16:58 UTC · 7 points · 3 comments · 1 min read · LW link

Getting Out of the Filter Bubble Outside Your Filter Bubble
20 May 2019 0:15 UTC · 21 points · 29 comments · 6 min read · LW link