Rauno Arike

Karma: 189

Rauno’s Shortform

Rauno Arike · 15 Nov 2024 12:08 UTC
3 points · 6 comments · 1 min read

A Dialogue on Deceptive Alignment Risks

Rauno Arike · 25 Sep 2024 16:10 UTC
11 points · 0 comments · 18 min read

[Interim research report] Evaluating the Goal-Directedness of Language Models

18 Jul 2024 18:19 UTC
39 points · 4 comments · 11 min read

Early Experiments in Reward Model Interpretation Using Sparse Autoencoders

3 Oct 2023 7:45 UTC
17 points · 0 comments · 5 min read

Exploring the Lottery Ticket Hypothesis

Rauno Arike · 25 Apr 2023 20:06 UTC
54 points · 3 comments · 11 min read

[Question] Request for Alignment Research Project Recommendations

Rauno Arike · 3 Sep 2022 15:29 UTC
10 points · 2 comments · 1 min read

Countering arguments against working on AI safety

Rauno Arike · 20 Jul 2022 18:23 UTC
7 points · 2 comments · 7 min read

Clarifying the confusion around inner alignment

Rauno Arike · 13 May 2022 23:05 UTC
31 points · 0 comments · 11 min read