
StanislavKrym

Karma: −17

Do we want too much from a potentially godlike AGI?

StanislavKrym · Apr 11, 2025, 11:33 PM
−1 points
0 comments · 2 min read · LW link

[Question] Is the ethics of interaction with primitive peoples already solved?

StanislavKrym · Apr 11, 2025, 2:56 PM
−4 points
0 comments · 1 min read · LW link

[Question] What are the fundamental differences between teaching the AIs and humans?

StanislavKrym · Apr 6, 2025, 6:17 PM
3 points
0 comments · 1 min read · LW link

What does aligning AI to an ideology mean for true alignment?

StanislavKrym · Mar 30, 2025, 3:12 PM
1 point
0 comments · 8 min read · LW link