
eggsyntax

Karma: 1,029

AI safety & alignment researcher

LLMs Look Increasingly Like General Reasoners

eggsyntax · 8 Nov 2024 23:47 UTC
90 points
45 comments · 3 min read · LW link

AIS terminology proposal: standardize terms for probability ranges

eggsyntax · 30 Aug 2024 15:43 UTC
30 points
12 comments · 2 min read · LW link

LLM Generality is a Timeline Crux

eggsyntax · 24 Jun 2024 12:52 UTC
217 points
119 comments · 7 min read · LW link

Language Models Model Us

eggsyntax · 17 May 2024 21:00 UTC
156 points
55 comments · 7 min read · LW link

Useful starting code for interpretability

eggsyntax · 13 Feb 2024 23:13 UTC
25 points
2 comments · 1 min read · LW link

eggsyntax's Shortform

eggsyntax · 13 Jan 2024 22:34 UTC
2 points
111 comments · 1 min read · LW link