CBiddulph (Karma: 632)
Posts
CBiddulph’s Shortform (Jan 30, 2025, 9:35 PM) · 4 points · 18 comments · 1 min read
[Question] Why not train reasoning models with RLHF? (Jan 30, 2025, 7:58 AM) · 4 points · 4 comments · 1 min read
Worries about latent reasoning in LLMs (Jan 20, 2025, 9:09 AM) · 42 points · 3 comments · 7 min read
5 ways to improve CoT faithfulness (Oct 5, 2024, 8:17 PM) · 42 points · 40 comments · 6 min read
OpenAI’s Sora is an agent (Feb 16, 2024, 7:35 AM) · 96 points · 25 comments · 4 min read
Is Metaethics Unnecessary Given Intent-Aligned AI? (Sep 2, 2023, 9:48 AM) · 10 points · 0 comments · 7 min read
Preparing for AI-assisted alignment research: we need data! (Jan 17, 2023, 3:28 AM) · 31 points · 3 comments · 1 min read
The Rational Utilitarian Love Movement (A Historical Retrospective) (Nov 3, 2022, 7:11 AM) · 3 points · 0 comments · 1 min read