
catubc

Karma: 149

The case for stopping AI safety research

catubc · 23 May 2024 15:55 UTC
52 points · 38 comments · 1 min read · LW link

Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

catubc · 31 May 2023 21:18 UTC
26 points · 4 comments · 11 min read · LW link

Red-teaming AI-safety concepts that rely on science metaphors

catubc · 16 Mar 2023 6:52 UTC
5 points · 4 comments · 5 min read · LW link

AGIs may value intrinsic rewards more than extrinsic ones

catubc · 17 Nov 2022 21:49 UTC
8 points · 6 comments · 4 min read · LW link

LLMs may capture key components of human agency

catubc · 17 Nov 2022 20:14 UTC
27 points · 0 comments · 4 min read · LW link