
cousin_it

Karma: 30,403

https://vladimirslepnev.me

Using the universal prior for logical uncertainty

cousin_it · 16 Jun 2018 14:11 UTC
0 points
0 comments · 1 min read · LW link
(www.greaterwrong.com)

Understanding is translation

cousin_it · 28 May 2018 13:56 UTC
92 points
23 comments · 1 min read · LW link

Announcement: AI alignment prize round 2 winners and next round

cousin_it · 16 Apr 2018 3:08 UTC
64 points
29 comments · 2 min read · LW link

Using the universal prior for logical uncertainty (retracted)

cousin_it · 28 Feb 2018 13:07 UTC
15 points
13 comments · 2 min read · LW link

UDT as a Nash Equilibrium

cousin_it · 6 Feb 2018 14:08 UTC
18 points
17 comments · 1 min read · LW link

Beware arguments from possibility

cousin_it · 3 Feb 2018 10:21 UTC
6 points
14 comments · 1 min read · LW link

An experiment

cousin_it · 31 Jan 2018 12:20 UTC
12 points
11 comments · 1 min read · LW link

Biological humans and the rising tide of AI

cousin_it · 29 Jan 2018 16:04 UTC
29 points
23 comments · 1 min read · LW link

A simpler way to think about positive test bias

cousin_it · 22 Jan 2018 9:38 UTC
16 points
10 comments · 1 min read · LW link

How the LW2.0 front page could be better at incentivizing good content

cousin_it · 21 Jan 2018 16:11 UTC
18 points
14 comments · 1 min read · LW link

Beware of black boxes in AI alignment research

cousin_it · 18 Jan 2018 15:07 UTC
39 points
10 comments · 1 min read · LW link

Announcement: AI alignment prize winners and next round

cousin_it · 15 Jan 2018 14:33 UTC
81 points
68 comments · 2 min read · LW link

Announcing the AI Alignment Prize

cousin_it · 4 Nov 2017 11:44 UTC
1 point
0 comments · 1 min read · LW link
(www.lesserwrong.com)

Announcing the AI Alignment Prize

cousin_it · 3 Nov 2017 15:47 UTC
95 points
78 comments · 1 min read · LW link

Announcing the AI Alignment Prize

cousin_it · 3 Nov 2017 15:45 UTC
12 points
12 comments · 1 min read · LW link

The Limits of Correctness, by Bryan Cantwell Smith [pdf]

cousin_it · 25 Aug 2017 11:36 UTC
5 points
3 comments · 1 min read · LW link
(www.student.cs.uwaterloo.ca)

Using modal fixed points to formalize logical causality

cousin_it · 24 Aug 2017 14:33 UTC
21 points
10 comments · 4 min read · LW link

Against lone wolf self-improvement

cousin_it · 7 Jul 2017 15:31 UTC
46 points
73 comments · 2 min read · LW link

Steelmanning the Chinese Room Argument

cousin_it · 6 Jul 2017 9:37 UTC
10 points
313 comments · 1 min read · LW link

A cheating approach to the tiling agents problem

cousin_it · 30 Jun 2017 13:56 UTC
4 points
3 comments · 3 min read · LW link