Igor Ivanov

Karma: 488

LLMs can strategically deceive while doing gain-of-function research

Igor Ivanov · Jan 24, 2024, 3:45 PM
33 points
4 comments · 11 min read · LW link

Psychology of AI doomers and AI optimists

Igor Ivanov · Dec 28, 2023, 5:55 PM
3 points
0 comments · 22 min read · LW link

5 psychological reasons for dismissing x-risks from AGI

Igor Ivanov · Oct 26, 2023, 5:21 PM
24 points
6 comments · 4 min read · LW link

Let’s talk about Impostor syndrome in AI safety

Igor Ivanov · Sep 22, 2023, 1:51 PM
29 points
4 comments · 3 min read · LW link

Impending AGI doesn’t make everything else unimportant

Igor Ivanov · Sep 4, 2023, 12:34 PM
29 points
12 comments · 5 min read · LW link

6 non-obvious mental health issues specific to AI safety

Igor Ivanov · Aug 18, 2023, 3:46 PM
147 points
24 comments · 4 min read · LW link

What is everyone doing in AI governance

Igor Ivanov · Jul 8, 2023, 3:16 PM
11 points
0 comments · 5 min read · LW link

A couple of questions about Conjecture’s Cognitive Emulation proposal

Igor Ivanov · Apr 11, 2023, 2:05 PM
30 points
1 comment · 3 min read · LW link

How do we align humans and what does it mean for the new Conjecture’s strategy

Igor Ivanov · Mar 28, 2023, 5:54 PM
7 points
4 comments · 7 min read · LW link

Problems of people new to AI safety and my project ideas to mitigate them

Igor Ivanov · Mar 1, 2023, 9:09 AM
38 points
4 comments · 7 min read · LW link

Emotional attachment to AIs opens doors to problems

Igor Ivanov · Jan 22, 2023, 8:28 PM
20 points
10 comments · 4 min read · LW link

AI security might be helpful for AI alignment

Igor Ivanov · Jan 6, 2023, 8:16 PM
36 points
1 comment · 2 min read · LW link

Fear mitigated the nuclear threat, can it do the same to AGI risks?

Igor Ivanov · Dec 9, 2022, 10:04 AM
6 points
8 comments · 5 min read · LW link