
Stuart_Armstrong

Karma: 17,855

Alignment can improve generalisation through more robustly doing what a human wants—CoinRun example

Stuart_Armstrong · 21 Nov 2023 11:41 UTC
67 points
9 comments · 3 min read · LW link

How toy models of ontology changes can be misleading

Stuart_Armstrong · 21 Oct 2023 21:13 UTC
42 points
0 comments · 2 min read · LW link

Different views of alignment have different consequences for imperfect methods

Stuart_Armstrong · 28 Sep 2023 16:31 UTC
31 points
0 comments · 1 min read · LW link

Avoiding xrisk from AI doesn’t mean focusing on AI xrisk

Stuart_Armstrong · 2 May 2023 19:27 UTC
64 points
7 comments · 3 min read · LW link

What is a definition, how can it be extrapolated?

Stuart_Armstrong · 14 Mar 2023 18:08 UTC
34 points
5 comments · 7 min read · LW link

You’re not a simulation, ’cause you’re hallucinating

Stuart_Armstrong · 21 Feb 2023 12:12 UTC
25 points
6 comments · 1 min read · LW link