tailcalled

Karma: 7,856

Knocking Down My AI Optimist Strawman

tailcalled · Feb 8, 2025, 10:52 AM
31 points
3 comments · 6 min read · LW link

My Mental Model of AI Optimist Opinions

tailcalled · Jan 29, 2025, 6:44 PM
12 points
7 comments · 1 min read · LW link

Evolution’s selection target depends on your weighting

tailcalled · Nov 19, 2024, 6:24 PM
23 points
22 comments · 1 min read · LW link

Empathy/Systemizing Quotient is a poor/biased model for the autism/sex link

tailcalled · Nov 4, 2024, 9:11 PM
39 points
0 comments · 7 min read · LW link

Binary encoding as a simple explicit construction for superposition

tailcalled · Oct 12, 2024, 9:18 PM
12 points
0 comments · 1 min read · LW link

Rationalist Gnosticism

tailcalled · Oct 10, 2024, 9:06 AM
11 points
10 comments · 3 min read · LW link

RLHF is the worst possible thing done when facing the alignment problem

tailcalled · Sep 19, 2024, 6:56 PM
32 points
10 comments · 6 min read · LW link

[Question] Does life actually locally *increase* entropy?

tailcalled · Sep 16, 2024, 8:30 PM
10 points
27 comments · 1 min read · LW link

Why I’m bearish on mechanistic interpretability: the shards are not in the network

tailcalled · Sep 13, 2024, 5:09 PM
22 points
40 comments · 1 min read · LW link

In defense of technological unemployment as the main AI concern

tailcalled · Aug 27, 2024, 5:58 PM
44 points
36 comments · 1 min read · LW link

The causal backbone conjecture

tailcalled · Aug 17, 2024, 6:50 PM
26 points
0 comments · 2 min read · LW link

Rationalists are missing a core piece for agent-like structure (energy vs information overload)

tailcalled · Aug 17, 2024, 9:57 AM
62 points
9 comments · 4 min read · LW link

[LDSL#6] When is quantification needed, and when is it hard?

tailcalled · Aug 13, 2024, 8:39 PM
32 points
0 comments · 2 min read · LW link

[LDSL#5] Comparison and magnitude/diminishment

tailcalled · Aug 12, 2024, 6:47 PM
21 points
0 comments · 2 min read · LW link

[LDSL#4] Root cause analysis versus effect size estimation

tailcalled · Aug 11, 2024, 4:12 PM
29 points
0 comments · 2 min read · LW link

[LDSL#3] Information-orientation is in tension with magnitude-orientation

tailcalled · Aug 10, 2024, 9:58 PM
33 points
2 comments · 3 min read · LW link

[LDSL#2] Latent variable models, network models, and linear diffusion of sparse lognormals

tailcalled · Aug 9, 2024, 7:57 PM
26 points
2 comments · 3 min read · LW link

[LDSL#1] Performance optimization as a metaphor for life

tailcalled · Aug 8, 2024, 4:16 PM
31 points
6 comments · 5 min read · LW link

[LDSL#0] Some epistemological conundrums

tailcalled · Aug 7, 2024, 7:52 PM
54 points
11 comments · 10 min read · LW link

Yann LeCun: We only design machines that minimize costs [therefore they are safe]

tailcalled · Jun 15, 2024, 5:25 PM
19 points
8 comments · 1 min read · LW link
(twitter.com)