
Christopher King

Karma: 698

@theking@mathstodon.xyz

LDT (and everything else) can be irrational

Christopher King · 6 Nov 2024 4:05 UTC
3 points
6 comments · 2 min read · LW link

Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired

Christopher King · 9 Aug 2023 0:50 UTC
1 point
5 comments · 4 min read · LW link

Necromancy’s unintended consequences.

Christopher King · 9 Aug 2023 0:08 UTC
−6 points
2 comments · 2 min read · LW link

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.

Christopher King · 11 Jul 2023 19:27 UTC
17 points
11 comments · 2 min read · LW link

Challenge proposal: smallest possible self-hardening backdoor for RLHF

Christopher King · 29 Jun 2023 16:56 UTC
7 points
0 comments · 2 min read · LW link

Anthropically Blind: the anthropic shadow is reflectively inconsistent

Christopher King · 29 Jun 2023 2:36 UTC
Christopher King29 Jun 2023 2:36 UTC
43 points
40 comments10 min readLW link

Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor

Christopher King · 18 Jun 2023 1:52 UTC
38 points
28 comments · 4 min read · LW link

Demystifying Born’s rule

Christopher King · 14 Jun 2023 3:16 UTC
5 points
26 comments · 3 min read · LW link

Current AI harms are also sci-fi

Christopher King · 8 Jun 2023 17:49 UTC
26 points
3 comments · 1 min read · LW link

Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program

Christopher King · 2 Jun 2023 21:54 UTC
7 points
4 comments · 16 min read · LW link

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Christopher King · 1 Jun 2023 17:01 UTC
−9 points
1 comment · 3 min read · LW link

[Question] What projects and efforts are there to promote AI safety research?

Christopher King · 24 May 2023 0:33 UTC
4 points
0 comments · 1 min read · LW link

Seeing Ghosts by GPT-4

Christopher King · 20 May 2023 0:11 UTC
−13 points
0 comments · 1 min read · LW link

We are misaligned: the saddening idea that most of humanity doesn’t intrinsically care about x-risk, even on a personal level

Christopher King · 19 May 2023 16:12 UTC
3 points
5 comments · 2 min read · LW link

Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk*

Christopher King · 16 May 2023 15:18 UTC
22 points
6 comments · 2 min read · LW link

PCAST Working Group on Generative AI Invites Public Input

Christopher King · 13 May 2023 22:49 UTC
7 points
0 comments · 1 min read · LW link
(terrytao.wordpress.com)

The way AGI wins could look very stupid

Christopher King · 12 May 2023 16:34 UTC
48 points
22 comments · 1 min read · LW link

Are healthy choices effective for improving life expectancy anymore?

Christopher King · 8 May 2023 21:25 UTC
6 points
4 comments · 1 min read · LW link

Acausal trade naturally results in the Nash bargaining solution

Christopher King · 8 May 2023 18:13 UTC
3 points
0 comments · 4 min read · LW link

Formalizing the “AI x-risk is unlikely because it is ridiculous” argument

Christopher King · 3 May 2023 18:56 UTC
48 points
17 comments · 3 min read · LW link