
abramdemski

Karma: 19,359

Understanding Trust: Overview Presentations

abramdemski · Apr 16, 2025, 6:08 PM
22 points
0 comments · 1 min read · LW link

Understanding Trust—Overview Presentations

abramdemski · Apr 16, 2025, 6:05 PM
13 points
0 comments · 1 min read · LW link

Dream, Truth, & Good

abramdemski · Feb 24, 2025, 4:59 PM
50 points
11 comments · 4 min read · LW link

Judgements: Merging Prediction & Evidence

abramdemski · Feb 23, 2025, 7:35 PM
103 points
5 comments · 6 min read · LW link

[Question] Have LLMs Generated Novel Insights?

Feb 23, 2025, 6:22 PM
155 points
36 comments · 2 min read · LW link

Anti-Slop Interventions?

abramdemski · Feb 4, 2025, 7:50 PM
76 points
33 comments · 6 min read · LW link

Lecture Series on Tiling Agents #2

abramdemski · Jan 20, 2025, 9:02 PM
16 points
0 comments · 1 min read · LW link

Lecture Series on Tiling Agents

abramdemski · Jan 14, 2025, 9:34 PM
38 points
14 comments · 1 min read · LW link

Why Don’t We Just… Shoggoth+Face+Paraphraser?

Nov 19, 2024, 8:53 PM
144 points
58 comments · 14 min read · LW link

AI Craftsmanship

abramdemski · Nov 11, 2024, 10:17 PM
66 points
7 comments · 4 min read · LW link

o1 is a bad idea

abramdemski · Nov 11, 2024, 9:20 PM
161 points
39 comments · 2 min read · LW link

Seeking Collaborators

abramdemski · Nov 1, 2024, 5:13 PM
62 points
15 comments · 7 min read · LW link

Complete Feedback

abramdemski · Nov 1, 2024, 4:58 PM
25 points
8 comments · 3 min read · LW link

[Question] Why is o1 so deceptive?

abramdemski · Sep 27, 2024, 5:27 PM
180 points
24 comments · 3 min read · LW link

Formalizing the Informal (event invite)

abramdemski · Sep 10, 2024, 7:22 PM
42 points
0 comments · 1 min read · LW link

In Defense of Open-Minded UDT

abramdemski · Aug 12, 2024, 6:27 PM
79 points
28 comments · 11 min read · LW link

Leaving MIRI, Seeking Funding

abramdemski · Aug 8, 2024, 6:32 PM
264 points
19 comments · 2 min read · LW link

Circular Reasoning

abramdemski · Aug 5, 2024, 6:10 PM
91 points
37 comments · 8 min read · LW link

LLMs for Alignment Research: a safety priority?

abramdemski · Apr 4, 2024, 8:03 PM
145 points
24 comments · 11 min read · LW link

Modern Transformers are AGI, and Human-Level

abramdemski · Mar 26, 2024, 5:46 PM
219 points
87 comments · 5 min read · LW link