
catubc

Karma: 159

The case for stopping AI safety research

catubc · May 23, 2024, 3:55 PM
53 points
38 comments · 1 min read · LW link

Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

catubc · May 31, 2023, 9:18 PM
26 points
4 comments · 11 min read · LW link

Red-teaming AI-safety concepts that rely on science metaphors

catubc · Mar 16, 2023, 6:52 AM
5 points
4 comments · 5 min read · LW link

AGIs may value intrinsic rewards more than extrinsic ones

catubc · Nov 17, 2022, 9:49 PM
8 points
6 comments · 4 min read · LW link

LLMs may capture key components of human agency

catubc · Nov 17, 2022, 8:14 PM
27 points
0 comments · 4 min read · LW link