
Mikhail Samin

Karma: 1,436

My name is Mikhail Samin (diminutive Misha, @Mihonarium on Twitter, @misha in Telegram).

Humanity’s future can be huge and awesome; losing it would mean our lightcone (and maybe the universe) losing most of its potential value.

My research is currently focused on AI governance and on improving stakeholders’ understanding of AI and AI risks. I also have takes on what seem to me to be the very obvious, shallow parts of technical AI notkilleveryoneism, but many AI safety researchers have told me our conversations improved their understanding of the alignment problem.

I believe a capacity for global regulation is necessary to mitigate the risks posed by future general AI systems. I’m happy to talk to policymakers and researchers about ensuring AI benefits society.

I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (why?).

In the past, I launched the most-funded crowdfunding campaign in the history of Russia (it was to print HPMOR! We printed 21,000 copies, i.e., 63,000 books) and founded audd.io, which allowed me to donate more than $100k to EA causes, including more than $60k to MIRI.

[Less important: I’ve also started a project to translate 80,000 Hours, a career guide that helps people find fulfilling careers that do good, into Russian. The impact and the effectiveness aside, for a year I was the head of the Russian Pastafarian Church: a movement claiming to be a parody religion, with 200,000 members in Russia at the time, trying to increase the separation between religious organisations and the state. I was a political activist and a human rights advocate. I studied relevant Russian and international law and wrote appeals that won cases against the Russian government in courts; I was able to protect people from unlawful police action. I co-founded the Moscow branch of the “Vesna” democratic movement, coordinated election observers in a Moscow district, wrote dissenting opinions for members of electoral commissions, helped Navalny’s Anti-Corruption Foundation, helped Telegram with internet censorship circumvention, and participated in and organized protests and campaigns. The large-scale goal was to build a civil society and turn Russia into a democracy through nonviolent resistance. This goal wasn’t achieved, but some of the more local campaigns were successful. That felt important and was also mostly fun, except for being detained by the police. I think it’s likely the Russian authorities will imprison me if I ever visit Russia.]

No one has the ball on 1500 Russian olympiad winners who’ve received HPMOR

Mikhail Samin · 12 Jan 2025 11:43 UTC
74 points
21 comments · 1 min read · LW link

How to Give in to Threats (without incentivizing them)

Mikhail Samin · 12 Sep 2024 15:55 UTC
54 points
26 comments · 5 min read · LW link

[Question] Can agents coordinate on randomness without outside sources?

Mikhail Samin · 6 Jul 2024 13:43 UTC
6 points
16 comments · 1 min read · LW link

Claude 3 claims it’s conscious, doesn’t want to die or be modified

Mikhail Samin · 4 Mar 2024 23:05 UTC
78 points
115 comments · 14 min read · LW link

FTX expects to return all customer money; clawbacks may go away

Mikhail Samin · 14 Feb 2024 3:43 UTC
33 points
1 comment · 1 min read · LW link
(www.nytimes.com)

An EA used deceptive messaging to advance their project; we need mechanisms to avoid deontologically dubious plans

Mikhail Samin · 13 Feb 2024 23:15 UTC
18 points
1 comment · 1 min read · LW link

NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts

Mikhail Samin · 27 Dec 2023 18:44 UTC
42 points
17 comments · 1 min read · LW link

Some quick thoughts on “AI is easy to control”

Mikhail Samin · 6 Dec 2023 0:58 UTC
15 points
10 comments · 7 min read · LW link

It’s OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood

Mikhail Samin · 13 Nov 2023 16:51 UTC
0 points
17 comments · 1 min read · LW link

AI pause/governance advocacy might be net-negative, especially without a focus on explaining x-risk

Mikhail Samin · 27 Aug 2023 23:05 UTC
82 points
9 comments · 6 min read · LW link

Gradient descent might see the direction of the optimum from far away

Mikhail Samin · 28 Jul 2023 16:19 UTC
68 points
13 comments · 4 min read · LW link

A transcript of the TED talk by Eliezer Yudkowsky

Mikhail Samin · 12 Jul 2023 12:12 UTC
105 points
13 comments · 4 min read · LW link

A smart enough LLM might be deadly simply if you run it for long enough

Mikhail Samin · 5 May 2023 20:49 UTC
19 points
16 comments · 8 min read · LW link

Try to solve the hard parts of the alignment problem

Mikhail Samin · 18 Mar 2023 14:55 UTC
54 points
33 comments · 5 min read · LW link

Mikhail Samin’s Shortform

Mikhail Samin · 7 Feb 2023 15:30 UTC
6 points
24 comments · 1 min read · LW link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

Mikhail Samin · 3 Jan 2023 10:21 UTC
26 points
3 comments · 1 min read · LW link

You won’t solve alignment without agent foundations

Mikhail Samin · 6 Nov 2022 8:07 UTC
27 points
3 comments · 8 min read · LW link