
Martín Soto

Karma: 858

Mathematical Logic grad student, doing AI Safety research for ethical reasons.

Working on conceptual alignment, decision theory, cooperative AI and cause prioritization.

My webpage.

Leave me anonymous feedback.

Updatelessness doesn't solve most problems

Martín Soto · 8 Feb 2024 17:30 UTC
125 points
43 comments · 12 min read · LW link

OpenAI releases GPT-4o, natively interfacing with text, voice and vision

Martín Soto · 13 May 2024 18:50 UTC
54 points
23 comments · 1 min read · LW link
(openai.com)

Why does generalization work?

Martín Soto · 20 Feb 2024 17:51 UTC
43 points
16 comments · 4 min read · LW link

Conflict in Posthuman Literature

Martín Soto · 6 Apr 2024 22:26 UTC
39 points
1 comment · 2 min read · LW link
(twitter.com)

Natural abstractions are observer-dependent: a conversation with John Wentworth

Martín Soto · 12 Feb 2024 17:28 UTC
38 points
13 comments · 7 min read · LW link

A short critique of Vanessa Kosoy's PreDCA

Martín Soto · 13 Nov 2022 16:00 UTC
27 points
8 comments · 4 min read · LW link

Evidential Correlations are Subjective, and it might be a problem

Martín Soto · 7 Mar 2024 18:37 UTC
26 points
6 comments · 14 min read · LW link

The lattice of partial updatelessness

Martín Soto · 10 Feb 2024 17:34 UTC
21 points
5 comments · 5 min read · LW link

Sources of evidence in Alignment

Martín Soto · 2 Jul 2023 20:38 UTC
20 points
0 comments · 11 min read · LW link

The Alignment Problems

Martín Soto · 12 Jan 2023 22:29 UTC
19 points
0 comments · 4 min read · LW link

Quantitative cruxes in Alignment

Martín Soto · 2 Jul 2023 20:38 UTC
19 points
0 comments · 23 min read · LW link

Vanessa Kosoy's PreDCA, distilled

Martín Soto · 12 Nov 2022 11:38 UTC
17 points
19 comments · 5 min read · LW link

Why are counterfactuals elusive?

Martín Soto · 3 Mar 2023 20:13 UTC
14 points
6 comments · 2 min read · LW link

Alignment being impossible might be better than it being really difficult

Martín Soto · 25 Jul 2022 23:57 UTC
13 points
2 comments · 2 min read · LW link