
Garrett Baker

Karma: 4,821

I have signed no contracts or agreements whose existence I cannot mention.

They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.

Metaph. A. 5, 985 b 27–986 a 2.

RESCHEDULED Lighthaven Sequences Reading Group #16 (Saturday 12/28)

20 Dec 2024 6:31 UTC
19 points
8 comments · 2 min read · LW link

What and Why: Developmental Interpretability of Reinforcement Learning

Garrett Baker · 9 Jul 2024 14:09 UTC
68 points
4 comments · 6 min read · LW link

On Complexity Science

Garrett Baker · 5 Apr 2024 2:24 UTC
51 points
19 comments · 4 min read · LW link

So You Created a Sociopath—New Book Announcement!

Garrett Baker · 1 Apr 2024 18:02 UTC
52 points
3 comments · 1 min read · LW link

Announcing Suffering For Good

Garrett Baker · 1 Apr 2024 17:08 UTC
75 points
5 comments · 1 min read · LW link

Neuroscience and Alignment

Garrett Baker · 18 Mar 2024 21:09 UTC
40 points
25 comments · 2 min read · LW link

Epoch wise critical periods, and singular learning theory

Garrett Baker · 14 Dec 2023 20:55 UTC
16 points
1 comment · 5 min read · LW link

A bet on critical periods in neural networks

6 Nov 2023 23:21 UTC
24 points
1 comment · 6 min read · LW link

When and why should you use the Kelly criterion?

5 Nov 2023 23:26 UTC
27 points
25 comments · 16 min read · LW link

Singular learning theory and bridging from ML to brain emulations

1 Nov 2023 21:31 UTC
26 points
16 comments · 29 min read · LW link

My hopes for alignment: Singular learning theory and whole brain emulation

Garrett Baker · 25 Oct 2023 18:31 UTC
61 points
5 comments · 12 min read · LW link

AI presidents discuss AI alignment agendas

9 Sep 2023 18:55 UTC
217 points
23 comments · 1 min read · LW link
(www.youtube.com)

Activation additions in a small residual network

Garrett Baker · 22 May 2023 20:28 UTC
22 points
4 comments · 3 min read · LW link

Collective Identity

18 May 2023 9:00 UTC
59 points
12 comments · 8 min read · LW link

Activation additions in a simple MNIST network

Garrett Baker · 18 May 2023 2:49 UTC
26 points
0 comments · 2 min read · LW link

Value drift threat models

Garrett Baker · 12 May 2023 23:03 UTC
27 points
4 comments · 5 min read · LW link

[Question] What constraints does deep learning place on alignment plans?

Garrett Baker · 3 May 2023 20:40 UTC
9 points
0 comments · 1 min read · LW link

Pessimistic Shard Theory

Garrett Baker · 25 Jan 2023 0:59 UTC
72 points
13 comments · 3 min read · LW link

Performing an SVD on a time-series matrix of gradient updates on an MNIST network produces 92.5 singular values

Garrett Baker · 21 Dec 2022 0:44 UTC
9 points
10 comments · 5 min read · LW link

Don’t design agents which exploit adversarial inputs

18 Nov 2022 1:48 UTC
72 points
64 comments · 12 min read · LW link