Questions
Page 2
[Question] Is the ethics of interaction with primitive peoples already solved? · StanislavKrym · Apr 11, 2025, 2:56 PM · −4 points · 0 comments · 1 min read · LW link
[Question] How familiar is the Lesswrong community as a whole with the concept of Reward-modelling? · Oxidize · Apr 9, 2025, 11:33 PM · 1 point · 8 comments · 1 min read · LW link
[Question] What faithfulness metrics should general claims about CoT faithfulness be based upon? · Rauno Arike · Apr 8, 2025, 3:27 PM · 24 points · 0 comments · 4 min read · LW link
[Question] Are there any (semi-)detailed future scenarios where we win? · Jan Betley · Apr 7, 2025, 7:13 PM · 15 points · 3 comments · 1 min read · LW link
[Question] What are the fundamental differences between teaching the AIs and humans? · StanislavKrym · Apr 6, 2025, 6:17 PM · 3 points · 0 comments · 1 min read · LW link
[Question] LessWrong merch? · Brendan Long · Apr 3, 2025, 9:51 PM · 23 points · 2 comments · 1 min read · LW link
[Question] Why do many people who care about AI Safety not clearly endorse PauseAI? · humnrdble · Mar 30, 2025, 6:06 PM · 45 points · 41 comments · 2 min read · LW link
[Question] Does the AI control agenda broadly rely on no FOOM being possible? · Noosphere89 · Mar 29, 2025, 7:38 PM · 22 points · 3 comments · 1 min read · LW link
[Question] Share AI Safety Ideas: Both Crazy and Not. №2 · ank · Mar 28, 2025, 5:22 PM · 2 points · 10 comments · 1 min read · LW link
[Question] How many times faster can the AGI advance the science than humans do? · StanislavKrym · Mar 28, 2025, 3:16 PM · 0 points · 0 comments · 1 min read · LW link
[Question] Is AGI actually that likely to take off given the world energy consumption? · StanislavKrym · Mar 27, 2025, 11:13 PM · 2 points · 2 comments · 1 min read · LW link
[Question] Would it be effective to learn a language to improve cognition? · Hruss · Mar 26, 2025, 10:17 AM · 9 points · 7 comments · 1 min read · LW link
[Question] Should I fundraise for open source search engine? · samuelshadrach · Mar 23, 2025, 1:04 PM · −11 points · 2 comments · 1 min read · LW link
[Question] Urgency in the ITN framework · Shaïman · Mar 22, 2025, 6:16 PM · 0 points · 2 comments · 1 min read · LW link
[Question] Any mistakes in my understanding of Transformers? · Kallistos · Mar 21, 2025, 12:34 AM · 3 points · 7 comments · 1 min read · LW link
[Question] How far along Metr’s law can AI start automating or helping with alignment research? · Christopher King · Mar 20, 2025, 3:58 PM · 20 points · 21 comments · 1 min read · LW link
[Question] Seeking: more Sci Fi micro reviews · Yair Halberstadt · Mar 20, 2025, 2:31 PM · 7 points · 0 comments · 1 min read · LW link
[Question] Superintelligence Strategy: A Pragmatic Path to… Doom? · Mr Beastly · Mar 19, 2025, 10:30 PM · 6 points · 0 comments · 3 min read · LW link
[Question] Why am I getting downvoted on Lesswrong? · Oxidize · Mar 19, 2025, 6:32 PM · 6 points · 14 comments · 1 min read · LW link
[Question] What is the theory of change behind writing papers about AI safety? · Kajus · Mar 18, 2025, 12:51 PM · 7 points · 1 comment · 1 min read · LW link